Scaling (Quiz)

Let the learned weights after the perceptron algorithm finishes training be the weight vector W. Suppose the bias term is 0. If we scale W by a positive constant factor c (multiply each element of W by c), then the new set of weights will:

produce the exact same classification for all the data points

may output different classification results for some data points

Cannot be decided

First of all, in the perceptron algorithm (1 neuron), we learned that it is nothing but LOGISTIC REGRESSION.
Therefore, W will be the slope of the line (in the 1-feature case).

Thus, if the separating hyperplane is y = 2x + 0,
and I multiply it by c (let's take c = 3), the new equation becomes y = 6x.

Will it not change the decision boundary?

Can you please explain!

Hey @mananaroramail,
If the bias is 0 and you scale the weights by a positive constant c, the results won't change. The predicted class depends only on the sign of W·x, and sign(cW·x) = sign(W·x) for c > 0, so the decision boundary (the set of points where W·x = 0) stays the same. Also note that scaling W scales every weight, including the coefficient of y in your example: 2x − y = 0 becomes 6x − 3y = 0, which is still the line y = 2x, not y = 6x.
But if there is a non-zero bias b and you scale only W, the boundary does change: cW·x + b = 0 is the same line as W·x + b/c = 0, so the boundary shifts and some points can get a different classification.
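If it helps, here is a minimal NumPy sketch that checks both cases numerically. The data, weights, and constant here are made up purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))   # hypothetical data points
w = np.array([2.0, -1.0])       # assumed learned weights, for illustration
c = 3.0                         # positive scaling constant

def predict(w, X, b=0.0):
    # perceptron / logistic-regression decision: class 1 if w.x + b > 0
    # (same as thresholding the sigmoid at 0.5)
    return (X @ w + b > 0).astype(int)

# bias = 0: scaling w leaves every prediction unchanged
print(np.array_equal(predict(w, X), predict(c * w, X)))        # True

# non-zero bias, scaling only w: predictions can differ
b = 1.5
print(np.array_equal(predict(w, X, b), predict(c * w, X, b)))  # may be False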

I hope you got this.
Thank You :slightly_smiling_face:.

Ok, I got the intuition.

Thanks !

No problem.

I would request you to kindly mark this doubt as resolved and also to provide your valuable feedback.
Thank You :slightly_smiling_face:.