I have completed our second assignment and here is the link; it is functioning correctly.
I tried to implement the convergence condition as follows:
```python
def gradient_descent(x, y, learning_rate=0.01):
    m = x.shape[1] + 1          # number of parameters (features + bias)
    theta = np.zeros((m,))
    e = error(x, y, theta)
    de = e                      # change in error between iterations
    error_list = [e]
    itr = 0
    while de != 1.0:            # intended convergence condition
        grad = descent(x, y, theta)
        theta = theta + learning_rate * grad
        ne = error(x, y, theta)
        error_list.append(ne)
        de = ne - e
        e = ne
        itr += 1
    return error_list, theta
```
But when I try to run this logic, it just keeps processing and never produces any output. Any help regarding this?
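The likely culprit is the loop condition `while de != 1.0`: the change in error between iterations shrinks toward zero as the algorithm converges, so it will essentially never equal exactly `1.0`, and the loop runs forever. The usual fix is to stop when the absolute change in error falls below a small tolerance (and to cap the iteration count as a safety net). Below is a minimal self-contained sketch of that idea for linear regression; it assumes a mean-squared-error objective and appends a bias column to match `m = x.shape[1] + 1`, since the original `error` and `descent` helpers are not shown:

```python
import numpy as np

def gradient_descent(x, y, learning_rate=0.01, tol=1e-6, max_iters=10000):
    """Gradient descent with a tolerance-based convergence check.

    x has shape (n_samples, n_features); a bias column of ones is
    appended, mirroring m = x.shape[1] + 1 in the original.
    """
    X = np.hstack([x, np.ones((x.shape[0], 1))])  # add bias column
    theta = np.zeros(X.shape[1])
    error_list = []
    prev_error = np.inf
    for itr in range(max_iters):                  # cap iterations as a safety net
        residual = X @ theta - y
        e = 0.5 * np.mean(residual ** 2)          # mean squared error / 2
        error_list.append(e)
        # Stop when the error stops changing, not when the change equals 1.0
        if abs(prev_error - e) < tol:
            break
        grad = X.T @ residual / X.shape[0]        # gradient of the error
        theta = theta - learning_rate * grad      # step *down* the gradient
        prev_error = e
    return error_list, theta
```

Note the update is `theta - learning_rate * grad` here because `grad` is the gradient of the error; if your `descent` helper already returns the negative gradient, `theta + learning_rate * grad` is fine. Also double-check the sign in your own code, since stepping *up* the gradient makes the error grow and `de` stay nonzero as well.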
Regards
Dinesh