The massive evolution in the world of data (especially in the last 3-4 years) has really put into perspective how powerful machines can become at making decisions based purely on facts and figures that have been around for centuries- a feat not possible with any amount of human effort. This processing and interpreting of data to understand what exactly it conveys has given rise to a whole host of fields of study, each of which is making spectacular breakthroughs of its own to make the world a better place. One such field has found its success under the name of Deep Learning. But what exactly is it? Well, let’s try and find out.

Deep learning is itself a smaller part of an even bigger field of study and research- machine learning, or ML for short. The very backbone of deep learning is the use of highly sophisticated algorithms built on a framework whose structure and concept are derived from, and analogous to, the human brain. As such, it is only natural that the heart of these frameworks resembles neurons in a lot of ways- just as neurons are the heart of our entire nervous system. This framework in its entirety is what we refer to as an artificial neural network (ANN for short).

It is these same neural networks that are responsible for revolutionary advances and discoveries in artificial intelligence and machine learning. At the time of its inception, a network is sluggishly slow, just like the mind of a newborn baby- completely unaware of the workings of the world. Exposing it to real-life data (facts and figures) is what fine-tunes its accuracy so it can do the highly sophisticated and advanced jobs required of it. These neural networks, just like the human brain, work best when they learn from real-time, real-life experience. Once the network and its associated model reach the desired level of precision, it is really fun and intriguing to see them at work.


Deep Learning 101 is all about understanding the basic terms associated with the field (and what they mean). Some of these terms include-

  1. Neural Network

As discussed earlier, artificial neural networks are the backbone of deep learning. In theory, an ANN may be defined and visualized as a collection of interconnected artificial neurons that exchange data amongst themselves. When incoming data goes beyond what a neuron has already learned, the neuron is updated- its knowledge and experience grow; otherwise, the neuron simply processes the data according to its experience and passes on a result.
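The idea of interconnected neurons passing data forward can be sketched in a few lines of NumPy. This is a minimal, untrained forward pass through a hypothetical 2-3-1 network- the weights are random placeholders, purely for illustration:

```python
import numpy as np

def sigmoid(x):
    # Squashes each neuron's weighted sum into the range (0, 1)
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical network: 2 inputs, 3 hidden neurons, 1 output neuron.
# Random weights stand in for what training would normally learn.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(2, 3))   # input -> hidden connections
b1 = np.zeros(3)
W2 = rng.normal(size=(3, 1))   # hidden -> output connections
b2 = np.zeros(1)

def forward(x):
    # Each neuron sums its weighted inputs, then applies a nonlinearity
    hidden = sigmoid(x @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    return output

y = forward(np.array([0.5, -0.2]))
```

Training would nudge `W1` and `W2` whenever the output disagrees with the expected answer- that adjustment is the "updating" of a neuron's experience described above.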

  2. CNN (Convolutional Neural Network)

Used primarily in digital image processing (DIP), a CNN slides multiple independent filters (nothing but small square matrices of weights) over a multi-channeled image in order to extract contrasting and distinct features from it.
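The filter idea is easy to see in code. Below is a minimal sketch (plain NumPy, single channel, no padding or stride) of sliding one square filter over a toy image; the example kernel is a simple vertical-edge detector, chosen purely for illustration:

```python
import numpy as np

def convolve2d(image, kernel):
    # Slide the square filter over the image; each output pixel is the
    # elementwise product-sum of the filter and the patch beneath it.
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 filter that responds to vertical edges (bright-to-dark transitions)
edge = np.array([[1, 0, -1],
                 [1, 0, -1],
                 [1, 0, -1]])

image = np.zeros((5, 5))
image[:, 2:] = 1.0            # right half bright, left half dark
features = convolve2d(image, edge)   # strong response at the boundary
```

A real CNN learns the numbers inside its filters during training and stacks many such layers, but each layer is doing exactly this sliding product-sum.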

  3. RNN (Recurrent Neural Network)

In very simple terms, an RNN is used for processing sequential information: outputs from previous steps are fed back in and used to predict the next output for a completely new piece of data. A familiar illustration is the automatic recommendations on platforms such as Amazon, Netflix and Spotify, where what you consumed earlier in a sequence shapes what is suggested next.
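The defining trick- feeding the previous step back in- can be sketched with a single recurrent layer. This is an illustrative NumPy toy with random, untrained weights, not a production RNN:

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    # At each step, the hidden state mixes the new input with the
    # previous hidden state, so earlier items influence later outputs.
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(x @ Wx + h @ Wh + b)
        states.append(h)
    return states

rng = np.random.default_rng(1)
Wx = rng.normal(size=(4, 3))   # input -> hidden connections
Wh = rng.normal(size=(3, 3))   # hidden -> hidden (the recurrence)
b = np.zeros(3)

sequence = rng.normal(size=(5, 4))        # 5 time steps, 4 features each
states = rnn_forward(sequence, Wx, Wh, b) # one hidden state per step
```

The `h @ Wh` term is what makes the network "recurrent": remove it and each step is processed in isolation, with no memory of the sequence so far.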

Source by Shalini M