Why are we writing m = X.shape[0]? (Time: 21:08)
Hey @shatakshisingh2k,
While updating the weights of a neural network, we want to take small steps so that the weights are not disturbed by a large amount, which would make the model perform worse. That is why we divide the gradient by m, the number of samples in the input. Since each row of X is one sample, X.shape[0] gives us exactly that count.
You can think of this division as an averaging step: the gradient is averaged over all m samples, so the size of the update does not depend on how many samples we happen to have. This keeps the updates small and stable, which helps the model learn the task more smoothly and reach convergence faster.
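For example, here is a minimal sketch of a single gradient-descent step for a linear model (the names gradient_step, w, b and lr are just for illustration, not the exact code from the lecture) showing where m = X.shape[0] and the division by m come in:

```python
import numpy as np

def gradient_step(X, y, w, b, lr=0.01):
    # X has one sample per row, so X.shape[0] is the number of samples m
    m = X.shape[0]

    y_hat = X.dot(w) + b        # predictions for all m samples
    error = y_hat - y           # shape (m,)

    # Dividing by m averages the gradient over the batch, so the update
    # size stays small and does not grow just because we have more samples.
    dw = X.T.dot(error) / m
    db = np.sum(error) / m

    # Small step in the direction that reduces the loss
    w = w - lr * dw
    b = b - lr * db
    return w, b
```

Without the division by m, the gradient (and hence the update) would scale with the dataset size, forcing you to re-tune the learning rate every time the number of samples changes.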
I hope this helped you and cleared your doubt. Please rate your experience here.
Your feedback is very important. It helps us improve our platform and hence provide you the learning experience you deserve.
On the off chance you still have some questions or do not find the answer satisfactory, you may reopen the doubt.