Regarding Transfer Learning

well i was using transfer learning with the ResNet50 model, adding some layers of my own on top. When I tried to train the model, loading the dataset batch-wise using generators, training gave me a warning that utils.to_categorical cannot be used properly. Why is that??

hey @amankharb,
There are two ways to use generators to create training data in batches:

  1. Use the Keras ImageDataGenerator. It allows you to perform augmentation as well as batching of the dataset, with either categorical output or a single output, but it can't be used when you need to feed multiple inputs (see the first sketch after this list).

  2. Create a custom generator. Then it is your choice how you retrieve the data, how you process it and pass it to your deep learning model, and what outputs you provide for training (see the second sketch after this list).
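
A minimal sketch of option 1, assuming your images are organized in class sub-folders (the data/train path and the parameter values here are hypothetical). Note that class_mode="categorical" one-hot encodes the labels for you, which usually removes the need to call utils.to_categorical yourself:

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Augmentation + batching in one object
train_datagen = ImageDataGenerator(
    rescale=1.0 / 255,        # scale pixel values to [0, 1]
    rotation_range=20,        # simple augmentation
    horizontal_flip=True,
    validation_split=0.2,     # hold out 20% of the data for validation
)

train_gen = train_datagen.flow_from_directory(
    "data/train",             # hypothetical directory: one sub-folder per class
    target_size=(224, 224),   # ResNet50's expected input size
    batch_size=32,
    class_mode="categorical", # labels arrive one-hot encoded per batch
    subset="training",
)

val_gen = train_datagen.flow_from_directory(
    "data/train",
    target_size=(224, 224),
    batch_size=32,
    class_mode="categorical",
    subset="validation",
)

# model.fit(train_gen, validation_data=val_gen, epochs=20)
```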
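And a minimal sketch of option 2, a custom generator built on tf.keras.utils.Sequence (the image_paths/labels inputs are hypothetical placeholders). Here you decide exactly how data is loaded, processed, and labelled:

```python
import numpy as np
import tensorflow as tf

class CustomGenerator(tf.keras.utils.Sequence):
    """Hypothetical generator yielding batches of (images, one-hot labels)."""

    def __init__(self, image_paths, labels, num_classes, batch_size=32):
        self.image_paths = image_paths  # list of image file paths (assumption)
        self.labels = labels            # integer class ids, same order as paths
        self.num_classes = num_classes
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch
        return int(np.ceil(len(self.image_paths) / self.batch_size))

    def __getitem__(self, idx):
        start = idx * self.batch_size
        paths = self.image_paths[start:start + self.batch_size]
        ids = self.labels[start:start + self.batch_size]

        # Load and preprocess however you like; here we just resize and rescale
        images = np.stack([
            tf.keras.preprocessing.image.img_to_array(
                tf.keras.preprocessing.image.load_img(p, target_size=(224, 224))
            ) / 255.0
            for p in paths
        ])
        # One-hot encode per batch, instead of over the whole dataset at once
        targets = tf.keras.utils.to_categorical(ids, num_classes=self.num_classes)
        return images, targets

# model.fit(CustomGenerator(paths, labels, num_classes=10), epochs=20)
```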

Can you please provide a link to your code, so that I can debug it and tell whether there is an actual error or just a harmless warning.

Thank You and Happy Learning :laughing:.

here is the link to my code and the model that I am trying to create.

https://colab.research.google.com/drive/1uTrESpm7dgAIUPe5OkUNppyrwjrsAezi?usp=sharing

Can you tell me why my model is not performing well?

I am getting a maximum accuracy of 30% with the AlexNet architecture. Is it because of the small size of my dataset, or is there something wrong with what I have done?

Also, when I perform transfer learning, as you can see in the Google Colab, the val_loss was fluctuating for the first 20 epochs. Can you please analyze my code and tell me where I am going wrong, both before and after using transfer learning?

hey @amankharb,
Sorry for responding so late; kindly provide access to the above code file.

Thank You and Happy Learning :slightly_smiling_face:.

https://colab.research.google.com/drive/1uTrESpm7dgAIUPe5OkUNppyrwjrsAezi?usp=sharing

i have provided access to everyone, please check.

hey @amankharb,
sorry for such a late response,
there are some things you need to understand (a sketch illustrating all three points follows the list):

  1. In a convolution layer, the number of filters matters a lot. Don't start with a large number of filters; increase them gradually from layer to layer, so the model learns progressively and is able to distinguish between the classes.
  2. For classification tasks, you need to use either a softmax or a sigmoid activation function on the final output layer. ReLU can be used, but it treats the output as a numeric value, whereas we want the model to predict a probability for each class.
  3. You need to understand how dropout and dense layers work together, and also pooling.
    A Flatten layer creates a very large number of nodes in a layer, resulting in low performance and a high number of model parameters; pooling (or global average pooling) keeps this in check.
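
To make the three points above concrete, here is a minimal sketch of a small classifier (NUM_CLASSES and the layer sizes are placeholder assumptions): filters grow gradually across the convolution blocks, pooling shrinks the feature maps, GlobalAveragePooling2D replaces a large Flatten, dropout regularizes the dense head, and the final layer uses softmax:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

NUM_CLASSES = 10  # assumption: set to the number of classes in your dataset

model = models.Sequential([
    layers.Input(shape=(224, 224, 3)),

    # Point 1: start with few filters and increase them block by block
    layers.Conv2D(32, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),   # pooling halves the spatial size
    layers.Conv2D(64, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),
    layers.Conv2D(128, 3, activation="relu", padding="same"),
    layers.MaxPooling2D(),

    # Point 3: GlobalAveragePooling2D keeps the parameter count far lower than
    # Flatten, which would feed every spatial position into the Dense layer
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dropout(0.5),     # dropout regularizes the dense head

    # Point 2: softmax so the outputs are class probabilities
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",  # matches one-hot labels
              metrics=["accuracy"])
```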

I have implemented your AlexNet model architecture in a better way.
Have a look at this code: https://colab.research.google.com/drive/1zgv6ytMLU2KGf_zHyEgRQP7QWOU2PybI?usp=sharing

I hope this helps you.