Logistic Regression Weight and Bias

In the implementation of logistic regression, while generating W and b, why do we multiply them by 2 and 5? Aren't we just generating random numbers?

Hey Shubham,
As we know the np.random.randint function generates a float point number between 0 and 1.The weights and bias which initialise in this step would eventually be our initial starting point before gradient descent starts, i.e., the initial equation of our Decision Boundary.

If we begin with weights at (or very close to) zero, the gradient descent algorithm needs many more steps to reach convergence. So, after observing our data, we shift these values towards more expected values of the slope and intercept (in this case 2 and 5, respectively) so that we converge earlier. In some scenarios this is also done to reduce the chance of getting stuck in a local minimum.
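Putting the two ideas together, a minimal sketch of this kind of initialisation (the names W and b, and the seed, are my assumptions, not necessarily the exact course code) looks like:

```python
import numpy as np

np.random.seed(42)  # assumed seed, only for reproducibility

# Scale the raw [0, 1) draws towards the expected slope (~2)
# and intercept (~5) observed from the data, so gradient descent
# starts closer to the final decision boundary.
W = 2 * np.random.rand()  # weight starts somewhere in [0, 2)
b = 5 * np.random.rand()  # bias starts somewhere in [0, 5)

print(W, b)
```

Multiplying a uniform [0, 1) draw by a constant c simply rescales it to the range [0, c), which is how the factors 2 and 5 nudge the starting point towards the expected slope and intercept.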

I hope this resolves your doubt.


