As Prateek Bhaiya asked us to implement gradient descent for multivariable linear regression as homework, I am sharing a link to my code.
I have also implemented a mini-batch gradient descent algorithm in the same code. Please check that as well.
Please check this Homework Assignment
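(For reference, a minimal sketch of the kind of mini-batch update involved might look like the following. The function and variable names here are only illustrative and are not taken from the linked code.)

import numpy as np

def mini_batch_gradient_descent(X, y, learning_rate=0.001, batch_size=32, epochs=100):
    """Mini-batch gradient descent for linear regression (illustrative sketch)."""
    m, n = X.shape
    theta = np.zeros(n)          # model parameters
    errors = []                  # mean squared error after each epoch

    for epoch in range(epochs):
        # shuffle the data so each epoch sees the batches in a different order
        idx = np.random.permutation(m)
        X_shuffled, y_shuffled = X[idx], y[idx]

        for start in range(0, m, batch_size):
            X_batch = X_shuffled[start:start + batch_size]
            y_batch = y_shuffled[start:start + batch_size]

            # gradient of the mean squared error on this batch
            predictions = X_batch.dot(theta)
            gradient = (2.0 / len(X_batch)) * X_batch.T.dot(predictions - y_batch)
            theta -= learning_rate * gradient

        # track the error on the full training set after each epoch
        errors.append(np.mean((X.dot(theta) - y) ** 2))

    return theta, errors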
Try plotting the error and the accuracy against the number of epochs. These plots will give you a good indication of how your algorithm is doing. If something doesn't seem right, feel free to ask.
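Something along these lines could work for the plots (a minimal sketch; it assumes you store the per-epoch error and accuracy in lists, one value per epoch):

import matplotlib.pyplot as plt

def plot_training_curves(errors, accuracies):
    """Plot error and accuracy against the epoch number."""
    epochs = range(1, len(errors) + 1)

    plt.figure(figsize=(10, 4))

    plt.subplot(1, 2, 1)
    plt.plot(epochs, errors)
    plt.xlabel("Epoch")
    plt.ylabel("Error")
    plt.title("Error vs. epochs")

    plt.subplot(1, 2, 2)
    plt.plot(epochs, accuracies)
    plt.xlabel("Epoch")
    plt.ylabel("Accuracy")
    plt.title("Accuracy vs. epochs")

    plt.tight_layout()
    plt.show()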
Actually, I noticed that when I vary the learning rate, the accuracy of the algorithm varies drastically. For example, with eta=0.001 the algorithm gives around 96% accuracy, but when I set it to 0.0001 or 0.01 the accuracy is less than 50%.
Is this supposed to happen?
How do I choose the right learning rate?
And what are epochs?
Hi Sagnik,
We decide by tracing the error plot.
If the error graph is oscillating, it means you need to decrease the learning rate.
If the error has stayed at a roughly constant minimum value for 10 to 20 epochs, it means we have reached the minimum and don't need any further training epochs.
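For instance, you could train with a few different learning rates and compare the error curves, keeping the rate whose curve decreases smoothly. A sketch (the train_fn argument is an assumption: any function that returns the per-epoch errors, such as the mini-batch sketch earlier in this thread, would do):

import matplotlib.pyplot as plt

def compare_learning_rates(train_fn, X, y, rates=(0.0001, 0.001, 0.01), epochs=100):
    """train_fn(X, y, learning_rate=..., epochs=...) is assumed to return (theta, per-epoch errors)."""
    for rate in rates:
        _, errors = train_fn(X, y, learning_rate=rate, epochs=epochs)
        plt.plot(range(1, epochs + 1), errors, label="eta = {}".format(rate))
    # a smoothly decreasing curve suggests a good learning rate;
    # an oscillating or flat curve suggests the rate is too high or too low
    plt.xlabel("Epoch")
    plt.ylabel("Error")
    plt.legend()
    plt.show()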
Can you suggest a code snippet to incorporate the epochs concept while checking the error plot?
Sagnik,
We have to manually check whether the difference in error between two consecutive epochs is less than our desired threshold; if it is, we stop running further epochs.
So calculate and store the error during regression,
then check the difference between the current and previous error. If the difference is less than 0.001 (say), stop the epochs.
if abs(error[i] - error[i - 1]) > threshold:
    continue  # the error is still changing, so keep training
else:
    break     # the error has plateaued, so stop training
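Putting that together, an early-stopping check inside the training loop might look like this. This is only a sketch: the function name, the full-batch update, and the parameter defaults are assumptions, not taken from the assignment code.

import numpy as np

def train_with_early_stopping(X, y, learning_rate=0.001, max_epochs=1000, threshold=0.001):
    """Gradient descent that stops once the per-epoch error stops improving (illustrative sketch)."""
    m, n = X.shape
    theta = np.zeros(n)
    errors = []

    for epoch in range(max_epochs):
        # one full-batch gradient descent step on the mean squared error
        gradient = (2.0 / m) * X.T.dot(X.dot(theta) - y)
        theta -= learning_rate * gradient

        errors.append(np.mean((X.dot(theta) - y) ** 2))

        # stop when the change in error between consecutive epochs falls below the threshold
        if epoch > 0 and abs(errors[-1] - errors[-2]) < threshold:
            print("Stopping at epoch", epoch)
            break

    return theta, errors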