Doubt in logistic Regression

In the case of logistic regression, why do we have to maximize the loss function?

Hey @abhaygarg2001, if you watch the video once again, you will see that sir has differentiated the log-likelihood function, i.e.:

log_likelihood = y * log(y_hat) + (1 - y) * log(1 - y_hat)

where y is the true label (0 or 1) and y_hat is the predicted probability.

So we don't maximize a loss function; we maximize the likelihood. Taking the negative of the log-likelihood gives us the loss, and maximizing the likelihood is therefore equivalent to minimizing the loss:

loss = - (log_likelihood)
     = - y * log(y_hat) - (1 - y) * log(1 - y_hat)
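This loss is the binary cross-entropy. A minimal sketch of computing it in NumPy (the function name and the epsilon clipping are my own choices, not from the video):

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-12):
    # Clip predictions away from 0 and 1 to avoid log(0)
    y_pred = np.clip(y_pred, eps, 1 - eps)
    # loss = -[ y*log(y_hat) + (1-y)*log(1-y_hat) ], averaged over samples
    return -np.mean(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))

y_true = np.array([1, 0, 1, 1])
y_pred = np.array([0.9, 0.1, 0.8, 0.7])
print(binary_cross_entropy(y_true, y_pred))
```

Note how the loss shrinks as y_pred gets closer to y_true, which is exactly what minimizing the negative log-likelihood pushes the model toward.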

Hope this helps.
Happy coding :slight_smile: