Why should we maximize this loss?
Logistic regression - gradient descent
Hey Abhishek,
We are not maximizing the loss.
Instead, we are maximizing the log-likelihood, which is
sum[ y_i*log(h_theta(x_i)) + (1 - y_i)*log(1 - h_theta(x_i)) ]
Since gradient descent minimizes rather than maximizes, we take the negative of this quantity and call it the cost function.
Now we want to minimize the cost function.
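To make this concrete, here is a minimal sketch of that idea in NumPy: the cost function is the negative mean log-likelihood, and gradient descent minimizes it. The function names, learning rate, and toy data are illustrative choices, not from the course material.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost(theta, X, y):
    # Negative log-likelihood (averaged): minimizing this
    # is the same as maximizing the log-likelihood.
    h = sigmoid(X @ theta)
    return -np.mean(y * np.log(h) + (1 - y) * np.log(1 - h))

def gradient_descent(X, y, lr=0.1, steps=1000):
    theta = np.zeros(X.shape[1])
    for _ in range(steps):
        h = sigmoid(X @ theta)
        grad = X.T @ (h - y) / len(y)  # gradient of the cost
        theta -= lr * grad             # step downhill on the cost
    return theta

# Toy data: intercept column plus one feature; label is 1 when the feature is positive.
X = np.c_[np.ones(6), [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]]
y = np.array([0, 0, 0, 1, 1, 1])
theta = gradient_descent(X, y)
```

Each step moves theta in the direction that lowers the cost, so the log-likelihood of the data rises as training proceeds.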
I hope this clears your doubt.
Thanks