Log likelihood, problem in concept

Prateek bhaiya has used the negative of the log likelihood in the code but has used gradient ascent; I am confused. We should use gradient descent to minimize the negative of the log likelihood.

Yes @sagartiwari1711, we need to use gradient descent, and that is why the negative sign is used in the code.
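To see why the two views are the same, here is a minimal toy sketch (my own example with a sigmoid hypothesis, not the exact course code; sigmoid and log_likelihood_grad are names I made up). The gradient of the log likelihood for logistic regression is (y - h(x)) * x, so taking a + step along it (ascent on the log likelihood) is exactly the same update as taking a - step along its negative (descent on the negative log likelihood).

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood_grad(y_true, x, w, b):
    # gradient of the (positive) log likelihood w.r.t. w and b
    grad_w = np.zeros(w.shape)
    grad_b = 0.0
    m = x.shape[0]
    for i in range(m):
        hx = sigmoid(np.dot(w, x[i]) + b)
        grad_w += (y_true[i] - hx) * x[i]
        grad_b += (y_true[i] - hx)
    return grad_w / m, grad_b / m

# ascent on the log likelihood:       w = w + learning_rate * grad_w
# descent on -log likelihood:         w = w - learning_rate * (-grad_w)   (same update)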

Hope this helps :slight_smile:

Feel free to ask me again anytime.

But the mentor has used a negative sign in the loss function and still used gradient ascent:

w = w + learning_rate * grad_w
b = b + learning_rate * grad_b

If you look at the code again, grad_w and grad_b have been multiplied by (-1) when the function is called, which ultimately makes the update:

w = w - (learning_rate * grad_w)
b = b - (learning_rate * grad_b)
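A quick numeric check makes the equivalence obvious (a toy example with made-up numbers, not the course code): adding the sign-flipped gradient gives exactly the same weights as subtracting the original gradient.

import numpy as np

w = np.array([0.5, -1.0])
grad_w = np.array([0.2, 0.4])                   # gradient of the log likelihood
learning_rate = 0.1

w_plus  = w + learning_rate * ((-1) * grad_w)   # + update with the (-1) folded into the gradient
w_minus = w - learning_rate * grad_w            # plain gradient descent update

print(np.allclose(w_plus, w_minus))             # True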

It's still just +, where is the -1? I don't understand.

Look at this function in the code:

def get_grads(y_true, x, w, b):
    # ...
    grad_w += (-1) * (y_true[i] - hx) * x[i]
    grad_b += (-1) * (y_true[i] - hx)
    # ...
    return [grad_w, grad_b]

Here is the code from the Coding Blocks repo; bhaiya put the -1 in and edited it later in the video:
def get_grads(y_true, x, w, b):

    grad_w = np.zeros(w.shape)
    grad_b = 0.0

    m = x.shape[0]

    for i in range(m):
        hx = hypothesis(x[i], w, b)

        grad_w += (y_true[i] - hx) * x[i]
        grad_b += (y_true[i] - hx)

    grad_w /= m
    grad_b /= m

    return [grad_w, grad_b]
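For completeness, this is roughly how a training loop would sit around that function (my own sketch; gradient_ascent, learning_rate, max_itr, and the initial w and b are placeholders, not from the repo). Because this version of get_grads returns the gradient of the log likelihood itself, the matching update uses a plus sign, i.e. gradient ascent on the log likelihood, which is the same as gradient descent on the negative log likelihood.

def gradient_ascent(x, y_true, w, b, learning_rate=0.1, max_itr=100):
    # repeatedly step in the direction of the log-likelihood gradient
    for itr in range(max_itr):
        grad_w, grad_b = get_grads(y_true, x, w, b)
        w = w + learning_rate * grad_w
        b = b + learning_rate * grad_b
    return w, b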

So is your doubt clear now? The code in the video is correct.

Bhaiya updated it at 9:12 in the video itself.

Please look at the chat in your inbox.