Our instructor asked us to create a validation set.
What I did was take every 5th image of the dataset and put it in the validation set; the rest went into the training set.
There are 4 classes in this dataset:
[ humans, dogs, cats, horses ]
For each class, there are:
162 images for training and
40 images for validation
Code Snippet for Validation Generator:
val_generator = train_gen.flow_from_directory(
    directory="Val_Images/",
    target_size=(150, 150),
    batch_size=20,
    class_mode="categorical"
)
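Just to make the batch arithmetic concrete: with 4 classes of 40 validation images each and `batch_size=20`, the generator yields `ceil(160 / 20) = 8` batches per validation pass. (When you pass a generator to `model.fit`, Keras typically infers this step count as `len(val_generator)`; the variable names below are just for illustration.)

```python
import math

# 4 classes x 40 validation images each, served in batches of 20.
num_classes = 4
images_per_class = 40
batch_size = 20

total_val_images = num_classes * images_per_class        # 160
validation_steps = math.ceil(total_val_images / batch_size)
print(validation_steps)  # 8 batches per validation pass
```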
How is the validation accuracy evaluated if we provide the validation data batch-wise?
What I understand about this approach is:
Training data is provided in batches, and the parameters are updated after each batch. This is essentially mini-batch gradient descent.
But how is the validation data passed, and what is the working behind it?
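For what it's worth, here is a minimal sketch (plain Python, no Keras) of how batch-wise validation accuracy can be accumulated: no parameters are updated, the model just predicts on each batch, correct predictions and sample counts are summed across batches, and accuracy is computed over the whole validation set at the end. The `batches` list of (predicted class, true class) pairs below is a stand-in assumption for what a generator like `val_generator` would yield.

```python
def batchwise_accuracy(batches):
    """Accumulate correct predictions over batches, then divide once."""
    correct = 0
    total = 0
    for preds, labels in batches:
        correct += sum(1 for p, y in zip(preds, labels) if p == y)
        total += len(labels)
    return correct / total

# Two fake batches of (predicted class, true class) pairs.
batches = [
    ([0, 1, 2, 3], [0, 1, 2, 0]),  # 3 of 4 correct
    ([1, 1], [1, 0]),              # 1 of 2 correct
]
print(batchwise_accuracy(batches))  # 4 correct out of 6 samples
```

Note the accuracy is weighted by batch size automatically, because counts (not per-batch percentages) are summed; this avoids a bias when the last batch is smaller than the rest.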