Doubt in gradient descent

import numpy as np

# gradient() and error() are helper functions defined elsewhere in my notebook

def gradientdescent(xtrn, ytrn, lrnrate=0.1, maxitr=100):
    theta = np.zeros((2,))
    errorlist = np.array([])
    thetalist = np.array([])

    for i in range(maxitr):
        grad = gradient(xtrn, ytrn, theta)
        errorlist = np.append(errorlist, error(xtrn, ytrn, theta))
        thetalist = np.append(thetalist, theta)

        # update both parameters using the current gradient
        theta[0] = theta[0] - lrnrate * grad[0]
        theta[1] = theta[1] - lrnrate * grad[1]

    return theta, errorlist, thetalist
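For context, the code calls gradient and error without showing them. A minimal sketch of what they might look like for mean-squared-error linear regression with hypothesis h(x) = theta[0] + theta[1]*x (these exact definitions are an assumption, not the poster's original code):

import numpy as np

def hypothesis(x, theta):
    # assumed single-feature model: h(x) = theta[0] + theta[1] * x
    return theta[0] + theta[1] * x

def error(xtrn, ytrn, theta):
    # mean squared error over the training set
    e = hypothesis(xtrn, theta) - ytrn
    return np.mean(e ** 2)

def gradient(xtrn, ytrn, theta):
    # gradient of the MSE with respect to theta[0] and theta[1]
    e = hypothesis(xtrn, theta) - ytrn
    g0 = 2 * np.mean(e)           # d(error)/d(theta[0])
    g1 = 2 * np.mean(e * xtrn)    # d(error)/d(theta[1])
    return np.array([g0, g1])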

How can I find theta without maxitr, which is 100 in my code?

Are you thinking of calculating the theta values directly?
If yes, then you should watch the closed-form solution of linear regression in the course videos; after that you will be able to calculate the theta values directly.

If your question is different, please describe it in more detail.
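In case it helps, a minimal sketch of the closed-form (normal equation) approach, assuming xtrn is a 1-D feature array as in your code (the function name and bias-column handling here are illustrative assumptions, not the course's exact code):

import numpy as np

def closed_form_theta(xtrn, ytrn):
    # stack a bias column of ones so theta[0] acts as the intercept
    X = np.column_stack([np.ones_like(xtrn), xtrn])
    # normal equation: theta = (X^T X)^(-1) X^T y
    return np.linalg.pinv(X.T @ X) @ X.T @ ytrn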

No, I am asking how I can modify my code so that, without the maxitr parameter, I can stop the loop when the change in theta becomes zero or negligible.

MINERROR = 0.001  # you can define your own value as well

while True:
    grad = gradient(xtrn, ytrn, theta)
    errorlist = np.append(errorlist, error(xtrn, ytrn, theta))
    thetalist = np.append(thetalist, theta)

    theta[0] = theta[0] - lrnrate * grad[0]
    theta[1] = theta[1] - lrnrate * grad[1]

    # stop once the error barely changes between iterations
    if errorlist.shape[0] >= 2 and abs(errorlist[-2] - errorlist[-1]) <= MINERROR:
        break
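Since you asked specifically about the change in theta rather than the error, you could equally well stop when the parameter update itself becomes negligible. A minimal variant of the loop above, assuming the same gradient helper (the TOL name and value are just illustrative):

TOL = 1e-6  # illustrative tolerance on the size of the parameter update

while True:
    grad = gradient(xtrn, ytrn, theta)
    step = lrnrate * grad
    theta = theta - step

    # stop once the update to theta is negligibly small
    if np.linalg.norm(step) <= TOL:
        break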

I hope I've cleared your doubt. Please rate your experience here.
Your feedback is very important. It helps us improve our platform and hence provide you with the learning experience you deserve.

If you still have some questions or do not find the answers satisfactory, you may reopen the doubt.