Hey @gautam75, I think there's a common answer to both of your questions, so let me explain the concept here. Image quality is affected by several factors (resolution, noise, contrast, blur, compression). Resolution determines how much visual information an image carries: a higher-resolution image contains more fine detail, while a lower-resolution image contains less. A convolutional neural network (CNN) based classifier takes an image as input, automatically learns features from it, and maps it to an output class. If the input resolution is degraded, the classifier has less detail to learn from, which hurts its performance. Experimental results show that degrading image resolution from higher to lower decreases the performance scores (accuracy, precision, and F1) of CNN-based image classification.
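To see concretely how downsampling destroys detail (no CNN involved, just a toy NumPy illustration I made up for this reply): take the finest pattern an image can hold, a pixel-level checkerboard, and average 2x2 blocks the way naive downsampling does. All the contrast vanishes.

```python
import numpy as np

# Fine checkerboard: alternating 0/1 pixels -- the finest detail an image can hold.
img = np.indices((224, 224)).sum(axis=0) % 2  # shape (224, 224)

# Naive 2x downsampling: average non-overlapping 2x2 blocks.
small = img.reshape(112, 2, 112, 2).mean(axis=(1, 3))

print(img.std())    # 0.5 -- the full-resolution image has contrast/detail
print(small.std())  # 0.0 -- every pixel is now 0.5; the detail is gone
```

Real photos are not worst-case checkerboards, but the same thing happens to their high-frequency content (edges, textures), which is exactly what early CNN layers rely on.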
But again, you can't just increase the resolution of an image to increase the accuracy of a CNN. You also need to consider the compute you have available: your GPU/CPU has to be able to handle all the images at that larger resolution.
It really depends on the size of your network and your GPU. You need to fit a reasonably sized batch (16-64 images) in GPU memory. So the rule of thumb is: use images around 224x224 for ImageNet-scale networks, and around 96x96 for something smaller and easier.
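A rough back-of-envelope sketch of where that memory goes (function name and constants are my own, assuming float32 RGB inputs): the input batch alone scales with batch x height x width x channels x 4 bytes. Activations and gradients usually dominate on top of this, so treat it as a floor, not an estimate of total usage.

```python
# Bytes needed for just the input tensor of one batch (assumed float32, 3 channels).
def input_batch_bytes(batch_size, height, width, channels=3, bytes_per_value=4):
    return batch_size * height * width * channels * bytes_per_value

MIB = 1024 ** 2
print(input_batch_bytes(32, 224, 224) / MIB)  # 18.375 MiB
print(input_batch_bytes(32, 96, 96) / MIB)    # 3.375 MiB
```

So the input batch itself is cheap; it's the intermediate feature maps at each layer (which also scale with height x width) that blow up GPU memory as resolution grows.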
I hope this clears your doubt!
Happy Learning!