Let the learned weights after the perceptron algorithm finishes training be the weight vector W. Suppose the bias term is 0. If we scale W by a positive constant factor c (i.e., multiply each element of W by c), then the new set of weights will:
produce the exact same classification for all the data points
possibly output different classification results for some data points
Cannot be decided
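
For concreteness, here is a minimal numerical sketch you can run to test the options above (the weights W, the constant c, and the sample points are all made up for illustration, not taken from the question):

```python
import numpy as np

# Hypothetical learned weights, with bias = 0 as the question assumes
W = np.array([2.0, -1.0])
c = 3.0  # any positive scaling constant

# A handful of made-up data points, one per row
X = np.array([[ 1.0,  1.0],
              [ 0.5,  2.0],
              [-1.0,  0.5],
              [ 3.0, -2.0]])

# Perceptron prediction: the sign of the dot product W . x
pred_original = np.sign(X @ W)
pred_scaled   = np.sign(X @ (c * W))

print(pred_original)   # e.g. [ 1. -1. -1.  1.]
print(pred_scaled)     # compare against the original predictions
print(np.array_equal(pred_original, pred_scaled))  # True if the classifications agree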
First of all, we learned that the perceptron algorithm (a single neuron) is nothing but LOGISTIC REGRESSION.
Therefore, W will be the slope of the line (in the one-feature case).
Thus, if the separating hyperplane is y = 2x + 0, and I multiply W by c (let's take c = 3), the new equation becomes y = 6x.
Will that not change the decision boundary?
Can you please explain?
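
For anyone who wants to check the algebra, here is the two-feature case written out (the symbols w_1, w_2 are my own notation, chosen for illustration). The decision boundary is the set of points where the weighted sum is zero:

$$w_1 x_1 + w_2 x_2 = 0 \quad\Longleftrightarrow\quad x_2 = -\frac{w_1}{w_2}\,x_1,$$

$$(c\,w_1)\,x_1 + (c\,w_2)\,x_2 = 0 \quad\Longleftrightarrow\quad x_2 = -\frac{c\,w_1}{c\,w_2}\,x_1 = -\frac{w_1}{w_2}\,x_1.$$

Note that scaling W by c multiplies every component of the weighted sum, which is not the same thing as multiplying only the slope in y = 2x.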