Wrong value for theta

I wrote the same code, as far as I can tell, but I am getting a wrong value for the first theta (theta_0) and zero as the answer for all the others. Can you help me out?

hey @mehta.rashita18,
Kindly share a link to your code and any other files required, so that I can understand your approach and find what you have implemented wrong.

This is my code; the training data is loaded from sklearn. Please check why I am getting wrong answers for all the thetas.

hey @mehta.rashita18,
Can you please upload your code on GitHub and share its link with me? I need to run and test it, so images won't be feasible.
Thank You :slightly_smiling_face:

Can I send the code in text format here? I currently don't know how to upload code on GitHub, so I can paste it as text.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_boston

boston = load_boston()
X = boston.data
Y = boston.target

# standardise each feature
u = np.mean(X, axis=0)
std = np.std(X, axis=0)
X = (X - u) / std

# add a column of ones for the bias term
a = np.ones((X.shape[0], 1))
X = np.hstack((a, X))
print(X.shape)

def hypothesis(theta, x):
    n = x.shape[0]  # n -> number of features
    y_ = 0.0
    for i in range(n):
        y_ += theta[i] * x[i]
    return y_

def error(X, Y, theta):
    e = 0.0
    m = X.shape[0]  # number of examples
    for i in range(m):
        y_ = hypothesis(theta, X[i])
        e += (y_ - Y[i]) ** 2
    return e / m

def gradient(theta, X, Y):
    m, n = X.shape
    grad = np.zeros((n,))
    for i in range(n):
        for j in range(m):
            y_ = hypothesis(theta, X[j])
            grad[i] += (y_ - Y[j]) * X[j][i]
        return grad / m

def gradient_descent(X, Y, learning_rate=0.1, epoch=300):
    m, n = X.shape
    theta = np.zeros((n,))

    error_list = []
    for i in range(epoch):
        e = error(X, Y, theta)
        error_list.append(e)
        grad = gradient(theta, X, Y)
        for j in range(n):
            theta[j] = theta[j] - learning_rate * grad[j]
    return theta, error_list

theta, error = gradient_descent(X, Y)
print(theta)
plt.style.use("seaborn")
plt.plot(error)

hey @mehta.rashita18,
What is the problem? I just ran your code and it worked absolutely fine, without any error.
Is there something more you can explain to me?

It is not showing any error, but the values of theta are coming out to be zero, which is wrong…

No, it isn't giving zero values for theta; on my end it is providing non-zero values.

I don't know why this is happening… It shows the right values when I write the whole code in a single cell in a Jupyter notebook, but when I write the code separately in different cells, it gives me zeros as values…
Thank you for your time… If you know why this is happening, then please do reply…

Okay,
just do one thing for me.
Save your Jupyter notebook after running it (the version that shows the error or the zero values), open Google Colaboratory in Chrome, upload that notebook, and share its link with me.

Okay sir, here is the link to the Colab notebook giving zero values for theta… Please have a look, and if you find any solution to the issue, please do reply.

https://colab.research.google.com/drive/1Ad4UShXmJJNhGuWfkLFlSiLYNs_cdyYN?usp=sharing

hey @mehta.rashita18,
Sorry for replying so late; it just took me some time to find this.
In your gradient function, you have placed your return statement inside the for loop, when it was supposed to be outside of it. Hence, your model was only computing the gradient for the first column and not the others, which is why theta_0 kept updating while all the other thetas stayed at zero.
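
For reference, with the return moved outside both loops, the function accumulates the gradient for every feature before returning:

def gradient(theta, X, Y):
    m, n = X.shape
    grad = np.zeros((n,))
    for i in range(n):
        for j in range(m):
            y_ = hypothesis(theta, X[j])
            grad[i] += (y_ - Y[j]) * X[j][i]
    return grad / m  # returned only after every feature's gradient has been accumulated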

Have a look at this code.
I have changed quite a few things and wrapped it in a class, the way sklearn does; it might be helpful to you.
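
Roughly, the idea is a class with fit and predict methods like sklearn's. A sketch of that structure (names like LinearRegressionGD are just illustrative; the bias column is added inside fit and the gradient is computed in vectorised form) could look like this:

class LinearRegressionGD:
    # batch gradient descent linear regression with an sklearn-style fit/predict API
    def __init__(self, learning_rate=0.1, epochs=300):
        self.learning_rate = learning_rate
        self.epochs = epochs
        self.theta = None
        self.error_list = []

    def fit(self, X, Y):
        m, n = X.shape
        X = np.hstack((np.ones((m, 1)), X))      # add the bias column internally
        self.theta = np.zeros((n + 1,))
        for _ in range(self.epochs):
            y_ = X.dot(self.theta)               # predictions for all examples at once
            self.error_list.append(np.mean((y_ - Y) ** 2))
            grad = X.T.dot(y_ - Y) / m           # vectorised gradient over all features
            self.theta -= self.learning_rate * grad
        return self

    def predict(self, X):
        X = np.hstack((np.ones((X.shape[0], 1)), X))
        return X.dot(self.theta)

# usage: pass the standardised features *without* the extra ones column
# model = LinearRegressionGD().fit((boston.data - u) / std, Y)
# print(model.theta)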

Thank You and Happy Learning :slightly_smiling_face:.

Thank you so much, sir… and sorry I made such a silly mistake and pestered you for so long… Thanks for your time!

no problem @mehta.rashita18,
I would request a bit of your time to kindly mark this doubt as resolved and also provide your valuable feedback, as it would help us provide a better learning experience.

Thank You and Happy Learning :slightly_smiling_face:.

Already done!! :smiley: