Elaboration Please?

Could someone break down what is being achieved in this video, and how it can be implemented in code?

Hi @Vicky_08012000,

If you didn’t understand the concept of MLE, I would recommend watching the video again; Bhaiya has clearly explained what its purpose is and how we can use it.

So, to summarize -

Our ultimate aim is to estimate the parameters (the thetas) from the data. We do this by minimizing a loss function, the Mean Squared Error (MSE), using gradient descent. But how did we conclude that the Mean Squared Error is the right loss for regression?
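Just to make that first part concrete, here is a minimal sketch (my own illustration, not code from the video, assuming a plain NumPy setup where `X` is the design matrix and `y` the target vector) of minimizing the MSE loss with gradient descent:

```python
import numpy as np

def gradient_descent(X, y, lr=0.01, n_iters=1000):
    """Minimize the MSE loss (1/m) * ||X @ theta - y||^2 by gradient descent."""
    m, n = X.shape
    theta = np.zeros(n)
    for _ in range(n_iters):
        error = X @ theta - y             # predictions minus targets
        grad = (2.0 / m) * (X.T @ error)  # gradient of the MSE w.r.t. theta
        theta -= lr * grad                # step opposite to the gradient
    return theta
```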

To prove this, we used Maximum Likelihood Estimation (MLE).

The two views are equivalent: minimizing the loss is the same as maximizing the likelihood of the observed data.
Please follow the derivation in the video step by step.

So, from the MLE principle (assuming the targets have Gaussian noise around the model's prediction), we derived the MSE loss.
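In case it helps, here is the core of that derivation written out (my own sketch of the standard argument, assuming Gaussian noise with fixed variance sigma^2):

```latex
% Model: y_i = \theta^T x_i + \epsilon_i, with \epsilon_i \sim N(0, \sigma^2)
p(y_i \mid x_i; \theta) = \frac{1}{\sqrt{2\pi\sigma^2}}
    \exp\!\left(-\frac{(y_i - \theta^T x_i)^2}{2\sigma^2}\right)

% Log-likelihood of the whole dataset of m points
\log L(\theta) = \sum_{i=1}^{m} \log p(y_i \mid x_i; \theta)
    = -\frac{m}{2}\log(2\pi\sigma^2)
      - \frac{1}{2\sigma^2}\sum_{i=1}^{m} (y_i - \theta^T x_i)^2

% Maximizing \log L(\theta) over \theta is therefore the same as
% minimizing \sum_i (y_i - \theta^T x_i)^2, i.e. the MSE loss.
```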

Implementation part:
We don’t implement MLE from scratch step by step; instead, we derive a formula from the MLE principle and then implement that formula.

So finally, from the MLE principle we got the Mean Squared Error, and from this error function we can derive a closed-form formula for the thetas:
theta = (X^T X)^-1 (X^T y)
This is called the normal equation or closed-form solution.
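As a rough illustration (my own sketch, not code from the course, assuming `X` already includes a bias column of ones), the normal equation can be implemented in NumPy like this:

```python
import numpy as np

def fit_normal_equation(X, y):
    """Closed-form linear regression: theta = (X^T X)^-1 X^T y."""
    # np.linalg.solve is preferred over explicitly inverting X^T X,
    # since it is more numerically stable.
    return np.linalg.solve(X.T @ X, X.T @ y)

# Tiny usage example with a bias column of ones
X = np.array([[1.0, 1.0], [1.0, 2.0], [1.0, 3.0]])  # columns: [bias, feature]
y = np.array([2.0, 3.0, 4.0])                        # data generated by y = 1 + 1*x
theta = fit_normal_equation(X, y)
print(theta)  # approximately [1.0, 1.0]
```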

I would also recommend you to check out this blog: https://medium.com/quick-code/maximum-likelihood-estimation-for-regression-65f9c99f815d


@mohituniyal2010 you are a life saver!! Thanks a lot.

I hope I’ve cleared your doubt. I ask you to please rate your experience here.
Your feedback is very important. It helps us improve our platform and hence provide you the learning experience you deserve.

On the off chance that you still have some questions or don’t find the answers satisfactory, you may reopen the doubt.