In the videos for logistic regression, the instructor said that the loss function is the negative of the log of the likelihood function. I don't understand how we can say that!
Doubt in calculating loss function for logistic regression?
Hello @Mohit_Swain,
In statistics, the likelihood function measures the goodness of fit of a statistical model to a sample of data for given values of the unknown parameters. For a single example in binary classification, it is given by:

L = p^y * (1 - p)^(1 - y)

where p is the predicted probability and y is the true label (0 or 1). Note that when y = 1 this reduces to p, and when y = 0 it reduces to 1 - p, i.e. the probability the model assigned to the correct class.
The objective of logistic regression is to maximize this likelihood, or, intuitively, to increase the goodness of fit of our model to our training (sample) data.
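To make this concrete, here is a minimal sketch (with made-up predicted probabilities and labels, just for illustration) showing how the likelihood of a whole dataset is the product of the per-sample terms p^y * (1 - p)^(1 - y):

```python
import math

# Hypothetical predictions and labels (assumed data for illustration).
p = [0.9, 0.2, 0.8]  # model's predicted P(y = 1) for each sample
y = [1, 0, 1]        # true labels

# Per-sample likelihood: L_i = p^y * (1 - p)^(1 - y)
# For y=1 this is just p; for y=0 it is 1 - p.
likelihoods = [pi**yi * (1 - pi)**(1 - yi) for pi, yi in zip(p, y)]

# Likelihood of the whole sample = product of per-sample terms.
L = math.prod(likelihoods)
print(likelihoods)  # [0.9, 0.8, 0.8]
print(L)            # 0.576
```

A good model pushes each per-sample term toward 1, which pushes the product L toward 1 as well.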
Now, since the likelihood of the whole dataset is a product of probabilities, we take the log to convert that product into a sum of log-probabilities, and work with log(L) instead of L. Maximizing log(L) is the same as minimizing -log(L), which for a single sample is:

-log(L) = -y log(p) - (1 - y) log(1 - p)
Hence the name, negative log likelihood function.
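A quick numerical check (same made-up data as before) that summing the per-sample losses -y log(p) - (1 - y) log(1 - p) gives exactly -log of the product of likelihoods:

```python
import math

# Same hypothetical predictions and labels as above.
p = [0.9, 0.2, 0.8]
y = [1, 0, 1]

# Negative log-likelihood as a sum of per-sample losses.
nll = sum(-yi * math.log(pi) - (1 - yi) * math.log(1 - pi)
          for pi, yi in zip(p, y))

# The same quantity via -log of the product of likelihoods.
L = math.prod(pi**yi * (1 - pi)**(1 - yi) for pi, yi in zip(p, y))

print(nll, -math.log(L))  # both are the same number, about 0.5516
```

This is why minimizing the sum of these per-sample loss terms is equivalent to maximizing the likelihood.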
There is another interpretation of this loss function in terms of cross-entropy; it is worth reading about that as well.
Happy Learning