Why do we need to scale pixel values to the range [0,1] when they already share the same scale of [0,255]?
Scaling Pixel Values to [0,1]
hey @akshaydagar98,
Normalizing the data is a standard and useful practice, and one of the reasons behind it is the following:
- Treat all images in the same manner: some images have a high pixel range, others a low one, yet they all share the same model, weights, and learning rate. A high-range image tends to produce a stronger loss and a low-range image a weaker one, and both contribute to the backpropagation update. For visual understanding, though, you care more about the contours than about how strong the contrast is, as long as the contours are preserved. Scaling every image to the same range [0,1] makes the images contribute more evenly to the total loss. In other words, a high-range cat image gets one vote, a low-range cat image gets one vote, a high-range dog image gets one vote, a low-range dog image gets one vote… which is what we expect when training a dog/cat classifier. Without scaling, the high-range images get a disproportionately large vote in how the weights are updated. For example, a black-and-white cat image could have a higher pixel range than a pure black cat image, but that doesn't mean the black-and-white image is more important for training.
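A minimal NumPy sketch of this scaling (the two image arrays are made-up examples, not from any real dataset):

```python
import numpy as np

# Hypothetical grayscale "images" with very different pixel ranges.
high_range = np.array([[0, 255], [128, 200]], dtype=np.uint8)  # strong contrast
low_range = np.array([[10, 40], [20, 30]], dtype=np.uint8)     # weak contrast

# Scale both into [0, 1] by dividing by the maximum possible pixel value.
# Cast to float first, since uint8 division would truncate to zero.
high_scaled = high_range.astype(np.float32) / 255.0
low_scaled = low_range.astype(np.float32) / 255.0

# Both images now live on the same [0, 1] scale, so neither one
# dominates the loss simply because its raw pixel values are larger.
print(high_scaled.max())  # 1.0
```

Note that this divides by the fixed maximum (255) rather than each image's own maximum, so the relative contrast within each image is preserved.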
If this helps, mark the doubt as resolved and rate your experience!