In an MLP, we say that the output of one layer becomes the input of the next layer. The output of a particular neuron is passed through an activation function, which maps it into a range such as 0 to 1 or -1 to 1. Does that mean these activated values of the previous layer become the inputs of the next layer in an MLP?
Activation Function
Hey shalini,
Yes, your statement is correct.
The inputs are first multiplied by the weights, and then a bias is added to form the pre-activation value Z: Z = X*W + b.
Then the activation function is applied to that layer (or to each neuron, if we look at neurons individually). These activated values are passed as the inputs to the next layer. Adding more layers also means applying a non-linear function (ReLU, tanh) multiple times, which is what lets the model learn complex, non-linear relationships. One small caveat: not every activation maps into 0 to 1 or -1 to 1. Sigmoid outputs values in (0, 1) and tanh in (-1, 1), but ReLU outputs values in [0, ∞).
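To make the flow concrete, here is a minimal sketch of a two-layer forward pass in NumPy. The layer sizes, random weights, and the choice of ReLU followed by sigmoid are all illustrative assumptions, not part of the original question:

```python
import numpy as np

def relu(z):
    # ReLU: outputs are in [0, inf), not bounded above.
    return np.maximum(0.0, z)

def sigmoid(z):
    # Sigmoid: squashes values into the open interval (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Toy input: 1 sample with 3 features (shapes are illustrative).
x = rng.normal(size=(1, 3))

# Layer 1: Z = X*W + b, then activation.
W1 = rng.normal(size=(3, 4))
b1 = np.zeros(4)
z1 = x @ W1 + b1          # pre-activation value
a1 = relu(z1)             # activated values of layer 1

# Layer 2 takes the ACTIVATED output a1 of layer 1 as its input.
W2 = rng.normal(size=(4, 1))
b2 = np.zeros(1)
z2 = a1 @ W2 + b2
a2 = sigmoid(z2)          # final output, squashed into (0, 1)

print(a2.shape)
```

Notice that `a1`, the activated output of layer 1, is exactly what gets multiplied by `W2` in layer 2, which is the point of your question.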
I hope this helped!
Thanks