I don’t understand the role of the learning rate and the maximum number of iterations in a gradient descent function.
Linear regression
Gradient descent is an iterative approach: we don’t get the optimal solution directly, we take multiple steps (jumps) toward it. The maximum number of iterations caps how many of these steps we take, which can be thought of as the number of epochs.
The learning rate is the factor by which we scale each jump (the computed gradient), so that the jump doesn’t become so large that we skip over the optimal solution and land on the other side of it.
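To make this concrete, here is a minimal sketch of gradient descent for simple linear regression. The function name, parameter names, and sample data are my own illustration, not from any particular library: `max_iters` caps the number of jumps, and `learning_rate` scales each jump so we don’t overshoot the optimum.

```python
import numpy as np

def gradient_descent(x, y, learning_rate=0.05, max_iters=2000):
    """Fit y ~ w * x + b by taking repeated small steps downhill on the MSE loss."""
    w, b = 0.0, 0.0
    n = len(x)
    for _ in range(max_iters):  # max_iters caps how many jumps we take
        y_pred = w * x + b
        # Gradients of mean squared error with respect to w and b
        dw = (2 / n) * np.sum((y_pred - y) * x)
        db = (2 / n) * np.sum(y_pred - y)
        # Scale each jump by the learning rate so we don't overshoot the optimum
        w -= learning_rate * dw
        b -= learning_rate * db
    return w, b

# Data generated from y = 3x + 2, so the fit should land close to w=3, b=2
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 3 * x + 2
w, b = gradient_descent(x, y)
```

If you make `learning_rate` much larger (say 0.2 here), the updates overshoot and the loss diverges instead of shrinking; if you make it tiny, you need far more iterations than `max_iters` allows to get close to the optimum. That trade-off is exactly why both knobs exist.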
Hope this cleared your doubt.