I was trying to build a facial expression classifier using transfer learning, and one thing stuck in my mind: we say take a pretrained model, it provides features, and those features become the input to a fully connected layer that makes the prediction. But if the MLP makes the prediction, that means we also have to train the MLP, so how is transfer learning helping?
Transfer learning
Transfer learning is most helpful when you do not have much training data but still want a powerful model. Suppose, for example, you want to train an image classifier with 10 classes (say dog, cat, horse, …) but you have only a handful of images. You want the model to be robust, yet you lack enough training data.
The idea is to take a ResNet model with ImageNet weights and apply transfer learning. Those weights were trained to classify 1000 classes, so the network has already learned general visual features and patterns that are useful well beyond the original task. Since our task has only 10 classes, we remove the original head, add our own fully connected layers at the end, freeze most of the pretrained layers, and fine-tune the others. To answer your question directly: yes, the new fully connected head still has to be trained, but it holds only a small fraction of the network's parameters; the expensive part, the convolutional backbone that extracts the features, is already trained, so the small head can be learned quickly even from a small dataset.
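As a rough sketch of that workflow (assuming TensorFlow/Keras, a 10-class task, and 224×224 RGB inputs; the head layer sizes here are illustrative, not prescribed):

```python
import tensorflow as tf

# Load ResNet50 with ImageNet weights, dropping the original 1000-class head
base_model = tf.keras.applications.ResNet50(
    weights="imagenet",
    include_top=False,
    input_shape=(224, 224, 3),
    pooling="avg",  # global average pooling gives one feature vector per image
)
base_model.trainable = False  # freeze the backbone: its ImageNet features are reused as-is

# Only this small fully connected head is trained from scratch on our data
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10, activation="softmax"),  # 10 classes in this example
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-3),
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=10, validation_data=(val_images, val_labels))
```

Here only the Dense layers are trainable, so even a small facial-expression dataset is enough to fit them, while the millions of backbone parameters stay untouched.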
So that is how transfer learning helps us build robust, trustworthy models even when training data is scarce.
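If the frozen-backbone features are not quite enough for facial expressions, the "freeze some layers and fine tune others" step could look roughly like this (continuing the sketch above; how many layers to unfreeze is an arbitrary choice you would tune):

```python
# Fine-tuning sketch: unfreeze only the top of the backbone and retrain with a
# much smaller learning rate, so the pretrained features are nudged, not destroyed.
base_model.trainable = True
for layer in base_model.layers[:-20]:  # keep everything except the last ~20 layers frozen
    layer.trainable = False

model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-5),  # low learning rate for fine-tuning
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)
# model.fit(train_images, train_labels, epochs=5, validation_data=(val_images, val_labels))
```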
I hope this clears your doubt.
If you still have questions or do not find the answer satisfactory, you may reopen the doubt.