In the earlier post on the "History of Activation", I started off with the "step" activation function. That function was essentially a sincere attempt by the AI community to mimic the functioning of the human brain, which fires up only a few of its neurons at any point in time.
As we saw in that post, a single component of the artificial neural network - the "activation" function alone - underwent phenomenal research and painstaking effort, accumulating collective wisdom over a period of time. This has led to the multiple activation function options we have today.
Similarly, there are many other components which together make up what we call the artificial neural network of today.
Let's go over these components of an artificial neural network model.
What constitutes a neural network is essentially an input layer, followed by multiple hidden layers that process the data, leading to a final layer which is, obviously, called the output layer. The entire processing that happens to produce an output is referred to as the "forward pass". It is this component that defines how a neural network model should basically operate.
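To make the idea concrete, here is a minimal sketch of a forward pass in plain NumPy. The layer sizes, the random weights, and the `forward` helper are all illustrative assumptions for this post, not a reference implementation.

```python
import numpy as np

def forward(x, weights, biases):
    """One forward pass: each layer applies its weights and bias, then a non-linearity."""
    activation = x
    for W, b in zip(weights, biases):
        z = activation @ W + b      # linear transformation of the layer's input
        activation = np.tanh(z)     # non-linearity (tanh chosen arbitrarily here)
    return activation

# Hypothetical tiny network: 3 inputs -> 4 hidden units -> 1 output
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 4)), rng.normal(size=(4, 1))]
biases = [np.zeros(4), np.zeros(1)]
x = rng.normal(size=(1, 3))         # a single input sample
print(forward(x, weights, biases))  # the network's output
```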
"Activation function" - as we saw in the earlier post - essentially helps to transforms the input features in non-linear terms. Each activation function transforms the data / feature inputs in a certain manner and helps the model to get the desired output.
Then we have an integral component called the backward pass, which acts like the conductor of a music opera. Its scope starts off after a comparison of the actual output with the expected output, which determines the cost of the first iteration. This component helps to modify the parameters of the model by propagating that cost backwards (backpropagation) and computing the gradients that an algorithm such as gradient descent uses for its updates. The goal of the backward pass is to reduce the cost by enabling learning through an iterative process.
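As a rough sketch of that feedback loop, here is one gradient-descent step for a simple linear model with a squared-error cost. The function name, the learning rate, and the toy data are assumptions made for illustration:

```python
import numpy as np

def gradient_descent_step(W, x, y_true, lr=0.1):
    """One iteration: forward pass, cost, backward pass, parameter update."""
    y_pred = x @ W                    # forward pass (a linear model, for simplicity)
    error = y_pred - y_true           # compare actual output with expected output
    cost = 0.5 * np.mean(error ** 2)  # squared-error cost of this iteration
    grad = x.T @ error / len(x)       # backward pass: gradient of the cost w.r.t. W
    W = W - lr * grad                 # update parameters to reduce the cost
    return W, cost

rng = np.random.default_rng(1)
x = rng.normal(size=(16, 3))
W_true = np.array([[1.0], [-2.0], [0.5]])
y = x @ W_true
W = np.zeros((3, 1))
for i in range(50):
    W, cost = gradient_descent_step(W, x, y)
print(cost)  # the cost shrinks over the iterations
```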
There are, of course, optimization algorithms that guide the learning based on the feedback loop designed through the backward pass. Modifications to the design of the model - the number of layers, the count of neurons in each layer, the fine-tuning of parameters and hyper-parameters - essentially steer the learning process.
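Different optimizers consume the same gradients from the backward pass but apply them differently. As a sketch, compare plain gradient descent with a momentum variant; the hyper-parameter values here are illustrative assumptions:

```python
import numpy as np

def sgd_update(W, grad, lr=0.01):
    # Plain gradient descent: step directly against the gradient
    return W - lr * grad

def momentum_update(W, grad, velocity, lr=0.01, beta=0.9):
    # Momentum: accumulate a running direction so noisy updates smooth out
    velocity = beta * velocity + grad
    return W - lr * velocity, velocity

W = np.zeros(3)
v = np.zeros(3)
grad = np.array([0.2, -0.1, 0.05])  # a hypothetical gradient from the backward pass
W = sgd_update(W, grad)
W, v = momentum_update(W, grad, v)
```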
Finally, the entire structure is essentially designed to provide a meaningful solution to a problem, so it is the data which is THE component that gives sense to the entire model - without it there is really no purpose to a neural network model!
To summarize:
- Forward pass gives structure.
- Activation functions give expressiveness.
- Backward pass enables learning.
- Optimization algorithms guide learning.
- Data brings meaning.
Today we are creating ever more complex and capable models on the strength of our knowledge and understanding of all the components above, but we have drifted far away from where we started - the biological neuron's functioning.
The biological neuron in the human brain still remains mysterious. Neuroscientists still have no idea of all that a single neuron is capable of doing. Whatever they can define with neuroscience is still inadequate to explain the complexity of a single neuron's design. In other words, how neurons learn is still a black box!
The crude reality today is that we have built a complex maze of neural network models which work quite well and keep improving day by day. But the fact remains that we have neither the time nor the capability to unzip the mystery of a single neuron. Life goes on with the priorities in our hands, and with whatever is urgent - isn't it?