Convergence Condition Based on Error

I have completed our second assignment and here is the link; it is working correctly.

Assignment 2

I tried to implement the convergence condition as follows:

import numpy as np

def gradient_descent(x, y, learning_rate=0.01):
    m = x.shape[1] + 1               # one weight per feature plus a bias term
    theta = np.zeros((m,))
    e = error(x, y, theta)           # error() defined earlier in the assignment
    de = e
    error_list = [e]
    itr = 0
    while de != 1.0:                 # loop until the change in error equals exactly 1.0
        grad = descent(x, y, theta)  # descent() defined earlier in the assignment
        theta = theta + learning_rate * grad
        ne = error(x, y, theta)
        error_list.append(ne)
        de = ne - e                  # change in error between iterations
        e = ne
        itr += 1

    return error_list, theta

But when I try to run this logic, it just keeps processing with no output. Any help regarding this?

Regards
Dinesh

Actually, we will use the magnitude of the error, or the change in error, as the measure of convergence.
For example, if the error is very small (say 0.0001), or the change in the parameters is very small, we can conclude that we are near or at a local minimum. However, the best way is to plot the graph of error versus the number of iterations to see whether your model is working correctly.
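
For instance, here is a minimal sketch of an error-change-based stopping rule, assuming the same error() and descent() helpers from the assignment; tol and max_itr are hypothetical parameters added for illustration:

import numpy as np

def gradient_descent_tol(x, y, learning_rate=0.01, tol=1e-4, max_itr=10000):
    m = x.shape[1] + 1               # one weight per feature plus a bias term
    theta = np.zeros((m,))
    e = error(x, y, theta)           # error() as defined in the assignment
    error_list = [e]
    for itr in range(max_itr):       # hard cap so the loop can never run forever
        grad = descent(x, y, theta)  # descent() as defined in the assignment
        theta = theta + learning_rate * grad
        ne = error(x, y, theta)
        error_list.append(ne)
        if abs(ne - e) < tol:        # stop when the error barely changes
            break
        e = ne
    return error_list, theta

After it returns, plotting error_list against the iteration index (e.g. plt.plot(error_list) with matplotlib) shows whether the error is actually decreasing.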

It goes into an infinite loop because de never becomes exactly 1.0: the error typically decreases each iteration, so de (the change in error) is negative and shrinks toward zero, and an exact floating-point equality check like de != 1.0 will essentially never be satisfied.

Yeah, I checked it for 1000 iterations and cross-checked theta with sklearn's LinearRegression class. My algorithm is working fine. But since the assignment asks us to implement the error-based convergence condition, I was confused about what the error in the code should be.
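
For anyone else doing the same sanity check, here is a minimal sketch of that comparison, assuming x is the feature matrix without a bias column, y the targets, and that theta[0] holds the bias (the exact layout depends on how error() and descent() are written):

from sklearn.linear_model import LinearRegression

# fit sklearn's least-squares model on the same data
reg = LinearRegression().fit(x, y)

# compare against the theta learned by gradient descent
print("sklearn intercept:", reg.intercept_, "vs theta[0]:", theta[0])
print("sklearn coefs:", reg.coef_, "vs theta[1:]:", theta[1:])

If the two sets of numbers agree to a few decimal places, the descent updates are correct and only the stopping condition needs changing.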

The while condition is wrong, I guess; it makes your loop run infinitely.
Maybe the de variable is never reaching 1.

Yeah. Will there be a webinar or a lecture discussing the assignment?