We want our machine learning classifiers to return predictions with calibrated confidence. If we train a model on dog breeds and ask it to classify a photo of a cat, we want the model to return a prediction with low confidence. Bayesian and probabilistic approaches to machine learning offer a principled way to measure model confidence. We discuss how deep learning tools can represent model uncertainty. We explore a framework that casts dropout training in neural networks as approximate Bayesian inference, and we introduce Bayesian convolutional neural networks that provide model uncertainty in deep learning. Using this uncertainty information, we present a novel approach to information-theoretic active learning in a deep learning framework. Active learning in a deep learning setting can help achieve data efficiency in machine learning, and our approach is a first step in this direction. We demonstrate the practical impact of the framework through image classification tasks.
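As a minimal sketch of the core idea (not the models evaluated in this work), Monte Carlo dropout keeps dropout active at test time and treats the spread of repeated stochastic forward passes as an estimate of model uncertainty. The tiny network, its fixed random weights, and the function names below are all hypothetical, chosen only to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical fixed weights for a tiny 1-hidden-layer regression network.
W1 = rng.normal(size=(1, 50))
b1 = np.zeros(50)
W2 = rng.normal(size=(50, 1))
b2 = np.zeros(1)

def stochastic_forward(x, p=0.5):
    """One forward pass with dropout kept ON at test time (MC dropout)."""
    h = np.maximum(0.0, x @ W1 + b1)        # ReLU hidden layer
    mask = rng.random(h.shape) < (1 - p)    # Bernoulli dropout mask
    h = h * mask / (1 - p)                  # inverted-dropout scaling
    return h @ W2 + b2

def mc_dropout_predict(x, T=200):
    """Mean and std over T stochastic passes: predictive mean and uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.std(axis=0)

x = np.array([[0.3]])
mean, std = mc_dropout_predict(x)
```

A nonzero `std` here plays the role of the model-uncertainty signal: an active-learning acquisition function would preferentially query inputs where this spread (or an information-theoretic analogue such as BALD) is largest.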