Deep Learning

The term "Deep Learning" refers to training Neural Networks.

Let's start with the Housing Price Prediction example.

Let's say we have a dataset of six houses. We know the size of each house and its price, and we want to fit a function that predicts the price of a house as a function of its size. With linear regression, fitting a straight line to this data does the job.

What is a Neural Network?

We know that prices can never be negative. So in the figure above, instead of the straight-line fit, which eventually becomes negative, the line flattens out at zero. The blue line ends up being our function for predicting the price of a house as a function of its size.

We have the 'size of house' as the input to the neural network, which we call 'x'. It goes into the little circle, which then outputs the price, 'y'. That little circle is a single neuron in a neural network.

The neuron takes the input size, computes this linear function, takes the maximum of that value and zero, and then outputs the estimated price.

This is the simplest Neural Network example.
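The single-neuron computation above can be sketched in a few lines of Python. The weight and bias values here are made-up placeholders for illustration, not values learned from any data:

```python
def predict_price(size_sqft, w=0.5, b=-10.0):
    """Estimate the price (in $1000s) from the house size.

    Computes a linear function of the input, then takes the max
    with zero (a ReLU) so the predicted price is never negative.
    """
    linear = w * size_sqft + b   # straight-line fit of price vs. size
    return max(0.0, linear)     # flatten out at zero, as in the figure

print(predict_price(1000))  # large house -> positive estimated price
print(predict_price(10))    # tiny house  -> clipped to 0, never negative
```

In neural-network terminology, this max-with-zero function is called a ReLU (Rectified Linear Unit), and it is exactly what the blue line in the figure traces out.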

To make it a little more complex, let's assume the price of the house also depends on other factors:

  • 'Family Size' depends on 'size of the house' and 'No. of bedrooms'.

  • 'Walkability' depends on the address of the house, i.e. the 'zip-code'.

  • 'School quality' also depends on the address, i.e. the 'zip-code', and on 'wealth'.

So as a whole the 'price of the house' now depends on all these factors.

In the above figure,

  • The layer created by 'Size', 'No. of bedrooms', 'ZipCode', 'Wealth' is known as the Input Layer of this neural network.

  • The layer created by 'Family size', 'Walkability', 'School Quality' is known as the Hidden Layer of the neural network.

  • 'Price' is the Output of the neural network.

So what we actually implement is the following:

So for example, rather than saying the first node represents 'family size' and that it depends only on the features 'x1' and 'x2', we say: "well, neural network, you decide whatever you want this node to represent, and we'll give you all four features to compute whatever you want."

Every input feature is connected to every one of these circles in the middle, and the remarkable thing about neural networks is that, given enough training examples with both 'x' and 'y', they are remarkably good at figuring out functions that accurately map from 'x' to 'y'.
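The fully connected idea can be sketched as a single forward pass: every hidden unit sees all four input features, and the network decides for itself what each hidden unit represents. The weights below are random placeholders, not trained values, and the specific inputs are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    """Max of zero and the input, applied element-wise."""
    return np.maximum(0.0, z)

def forward(x, W1, b1, W2, b2):
    """One forward pass: input layer -> hidden layer -> price."""
    h = relu(W1 @ x + b1)   # hidden layer: each unit sees all 4 inputs
    return W2 @ h + b2      # output layer: estimated price

# [size, bedrooms, zip-code, wealth] -- illustrative values only
x = np.array([1500.0, 3.0, 94087.0, 7.0])

W1 = rng.normal(size=(3, 4))  # 3 hidden units, each connected to 4 inputs
b1 = np.zeros(3)
W2 = rng.normal(size=(1, 3))  # 1 output, connected to all 3 hidden units
b2 = np.zeros(1)

price = forward(x, W1, b1, W2, b2)
print(price.shape)  # (1,): a single estimated price
```

Training would adjust `W1`, `b1`, `W2`, `b2` from (x, y) examples; here we only show the wiring of the layers.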

Why is Deep Learning Taking off ?

There are multiple reasons for which deep learning is excelling over traditional machine learning algorithms.

From the above figure, we can see,

  • For traditional learning algorithms, as the amount of data increases, performance saturates after a certain point. So peak performance cannot be achieved simply by adding more data.

  • But for Neural Networks, as we keep increasing the amount of data, performance keeps improving. So to achieve peak performance with deep learning, we need to feed the neural network lots of data.

Here's where deep learning shines: it can perform much better than traditional machine learning algorithms when it is trained on large amounts of data.

About Me

I'm Rajarshi Bhadra, pursuing Electrical Engineering at Maulana Abul Kalam Azad University of Technology. I'm a Data Science enthusiast working on several projects in Machine Learning, Deep Learning, Artificial Intelligence, and Image Processing.