In the SVM implementation, what is the batch size?

What is the significance of the batch size, and how is it different from epochs?
Please help me understand this conceptually.

Hey @settingsingh, we usually train on very large datasets, and passing all of that data through the model at once has a huge RAM/memory requirement. To avoid memory-related errors and for faster computation, we pass the data through our ML models in batches.

So basically, the entire dataset is passed through the model in batches (groups of, say, 32, 64, or 128 samples, depending on the data size).

So within each epoch, instead of updating the weights a single time, we update them
(total size of data) / (batch_size) times, once per batch. An epoch is one full pass over the data; the batch size just decides how many updates happen during that pass.
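For a concrete picture, here is a minimal sketch (not the course's actual implementation) of mini-batch training for a linear SVM with hinge loss. The toy dataset, learning rate, and regularisation strength are made-up values chosen only to show how batch_size and epochs interact.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1,000 samples, 20 features, labels in {-1, +1} (made up for illustration)
X = rng.normal(size=(1000, 20))
y = np.sign(rng.normal(size=1000))
y[y == 0] = 1

n_samples, n_features = X.shape
batch_size = 64          # samples used for one weight update
epochs = 5               # full passes over the whole dataset
lr = 0.01                # learning rate (assumed value)
reg = 0.001              # L2 regularisation strength (assumed value)

w = np.zeros(n_features)
b = 0.0

# Number of weight updates in a single epoch
updates_per_epoch = int(np.ceil(n_samples / batch_size))

for epoch in range(epochs):
    # Shuffle once per epoch so the batches differ between passes
    order = rng.permutation(n_samples)
    for start in range(0, n_samples, batch_size):
        idx = order[start:start + batch_size]
        Xb, yb = X[idx], y[idx]

        # Hinge-loss subgradient computed on this batch only
        margins = yb * (Xb @ w + b)
        mask = margins < 1                     # samples violating the margin
        grad_w = reg * w - (yb[mask, None] * Xb[mask]).sum(axis=0) / len(idx)
        grad_b = -yb[mask].sum() / len(idx)

        # One weight update per batch, not one per epoch
        w -= lr * grad_w
        b -= lr * grad_b

print(f"{updates_per_epoch} weight updates per epoch, "
      f"{updates_per_epoch * epochs} updates in total")
```

With 1,000 samples and a batch size of 64, each epoch performs 16 weight updates instead of just one, and 5 epochs give 80 updates overall.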

I hope this resolves your doubt.
Please mark it as resolved as well :blush: