I don't understand the use of the error function in this function. Are we iterating over the first 100 points and calculating the error of each point?

import numpy as np

# gradient() and error() are assumed to be defined elsewhere in the course code
def gradientDescent(X, Y, max_steps=100, n=0.1):
    # theta holds the two parameters of the line: intercept and slope
    theta = np.zeros((2,))
    error_list = []
    for i in range(max_steps):
        grad = gradient(X, Y, theta)       # gradient of the error w.r.t. theta
        e = error(X, Y, theta)             # current average error over the whole dataset
        error_list.append(e)
        theta[0] = theta[0] - n * grad[0]  # update intercept
        theta[1] = theta[1] - n * grad[1]  # update slope
    return theta, error_list

Hey @Varunsh_20, inside the error function we iterate over the whole dataset, not just the first 100 points, and accumulate the error between the predictions our model makes and the actual values, i.e. y. The error function then returns the average of these errors.
The 100 you are seeing is max_steps, the number of gradient descent iterations: the error function is called once per iteration, so with max_steps=100 it is called 100 times, and each call looks at every point in the dataset.
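For reference, a typical error function for this single-feature linear-regression setup computes the mean squared error over all m points. Your post doesn't show the exact implementation from the course, so treat this as a sketch; the hypothesis y_hat = theta[0] + theta[1] * x is an assumption based on theta having two entries:

import numpy as np

def error(X, Y, theta):
    # Hypothetical sketch, not necessarily the course's exact code:
    # assumes predictions y_hat = theta[0] + theta[1] * x for a single feature.
    m = X.shape[0]                       # number of points in the dataset
    y_hat = theta[0] + theta[1] * X      # prediction for every point at once
    total = np.sum((y_hat - Y) ** 2)     # sum of squared prediction errors
    return total / m                     # average (mean squared) error

Each loop iteration in gradientDescent computes this once, so with max_steps=100 you end up with a list of 100 average-error values that you can plot to check whether the model is converging.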

I hope this helps you understand the concept 🙂
Happy Learning 🙂