How to decide up to which layer we want our model to be non-trainable

If, instead of back-propagating only through the final layers, I reduce the number of non-trainable layers (i.e. include more Conv2D layers in back-propagation), what would the effect be? How can I decide up to which layer my model should be non-trainable?

Hey @saksham_thukral,
Generally, while working on transfer learning, we keep the base model non-trainable and reuse its pre-trained weights, e.g. from ImageNet or another available source.
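To make this concrete, here is a minimal sketch of the usual setup in Keras, assuming MobileNetV2 as the base and a hypothetical 10-class task head. Note that `weights=None` is used here only so the snippet runs without downloading anything; in practice you would pass `weights="imagenet"`:

```python
from tensorflow import keras

# Hypothetical base model; in real transfer learning use weights="imagenet".
base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None
)
base.trainable = False  # freeze every layer of the base model

# Attach a new, trainable classification head for the target task.
model = keras.Sequential([
    base,
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(10, activation="softmax"),  # hypothetical 10 classes
])

# Only the new head's weights are updated during training.
print(len(model.trainable_variables))  # → 2 (the Dense kernel and bias)
```

With the base frozen, back-propagation only updates the new head, which is fast and works well when your dataset is small or similar to the pre-training data.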

But if you still want to decide which layers should be trainable and which should not, there is no theory that tells you the right cut-off point; you have to run a number of experiments and tune the number of trainable layers empirically. As a rough guide, unfreezing more layers gives the model more capacity to adapt to your dataset, but it also increases training time and the risk of overfitting, especially when your dataset is small. So try the combinations that make sense for your problem and compare their validation performance.
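One way to run those experiments is to treat the number of unfrozen layers as a hyperparameter. The sketch below (assuming MobileNetV2 with `weights=None` so it runs offline, and a hypothetical helper `unfreeze_last`) shows how you could loop over a few settings and train a model for each:

```python
from tensorflow import keras

def unfreeze_last(base, n):
    """Freeze all layers of `base`, then unfreeze the last `n` layers."""
    base.trainable = True  # reset so per-layer flags take effect
    cutoff = len(base.layers) - n
    for layer in base.layers[:cutoff]:
        layer.trainable = False

# Hypothetical base; in practice you would pass weights="imagenet".
base = keras.applications.MobileNetV2(
    input_shape=(96, 96, 3), include_top=False, weights=None
)

# Try several cut-off points and compare validation accuracy for each.
for n in (0, 5, 10):
    unfreeze_last(base, n)
    n_trainable = sum(1 for layer in base.layers if layer.trainable)
    print(f"unfrozen layers: {n_trainable}")
    # ...build the head, compile, fit, and record val_accuracy here...
```

Whichever setting gives the best validation score is the one to keep; there is no shortcut that avoids actually training and comparing.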

I hope this resolves your doubt.
Thank You.
Happy coding :slightly_smiling_face: .