In SVM, why are we doing random shuffling?

While calculating loss in SVM, we are randomly shuffling first. Why?

Hey @samriddhijain2000, it's optional in this case, but datasets are often stored sorted by class. For example, with 1000 training examples, the first 500 might belong to the -1 class and the next 500 to the +1 class. If we then create batches sequentially, each batch would contain examples of only one class, and training would not be as good.

So its better to always shuffle the data.
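As a small illustration of the point above, here is a sketch (using a toy NumPy dataset, not the actual course code) showing how a sequential batch from a class-sorted dataset contains only one label, while a shuffled batch mixes both classes:

```python
import numpy as np

# Toy dataset: first 500 examples labeled -1, next 500 labeled +1
# (hypothetical data, just to demonstrate the batching issue)
X = np.random.randn(1000, 2)
y = np.concatenate([-np.ones(500), np.ones(500)])

# Without shuffling, a sequential batch of 100 holds only one class
batch_no_shuffle = y[:100]
print(np.unique(batch_no_shuffle))  # only the -1 class appears

# Shuffle indices so each batch mixes both classes
rng = np.random.default_rng(0)
idx = rng.permutation(len(y))
X, y = X[idx], y[idx]

batch_shuffled = y[:100]
print(np.unique(batch_shuffled))  # both -1 and +1 appear
```

Shuffling the indices (rather than the arrays directly) keeps `X` and `y` aligned, which is important so each example stays paired with its label.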
Hope this cleared your doubt. :blush:
Please mark the doubt as resolved in my doubts section.