Why are we passing the input captions to an Embedding layer in this model? We have already preprocessed our captions and converted them to GloVe embeddings. Couldn't we pass the GloVe embeddings of the words directly to the next layer without using an Embedding layer?
Embedding layer use
Hey @Par1hsharma,
Yes, you can pass them directly.
Here, however, we are also training the model to learn those embeddings, which is why the Embedding layer needs to be there. Alternatively, you can use the GloVe vectors and skip that layer.
Give it a shot and see how the results change.
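
Here is a minimal sketch of the three options, assuming a TensorFlow/Keras setup; `vocab_size`, `embedding_dim`, `max_len`, and `embedding_matrix` are hypothetical placeholders, not names from the course notebook:

```python
# Minimal sketch (assumed TF/Keras setup); sizes below are hypothetical.
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Embedding, LSTM
from tensorflow.keras.models import Model

vocab_size, embedding_dim, max_len = 5000, 200, 34  # placeholder sizes

# Option 1: trainable Embedding layer -- the model learns its own word vectors.
inp1 = Input(shape=(max_len,))
x1 = Embedding(vocab_size, embedding_dim, mask_zero=True)(inp1)
x1 = LSTM(256)(x1)
learned_model = Model(inp1, x1)

# Option 2: Embedding layer seeded with GloVe vectors and frozen --
# the layer then acts purely as a lookup table.
embedding_matrix = np.zeros((vocab_size, embedding_dim))  # fill rows from the GloVe file
inp2 = Input(shape=(max_len,))
x2 = Embedding(vocab_size, embedding_dim,
               embeddings_initializer=tf.keras.initializers.Constant(embedding_matrix),
               trainable=False, mask_zero=True)(inp2)
x2 = LSTM(256)(x2)
glove_model = Model(inp2, x2)

# Option 3: no Embedding layer -- feed the pre-looked-up GloVe vectors directly.
inp3 = Input(shape=(max_len, embedding_dim))
x3 = LSTM(256)(inp3)
direct_model = Model(inp3, x3)
```

Note that with Option 3 the data generator has to convert each caption into a sequence of GloVe vectors itself, and padded positions are no longer masked automatically the way `mask_zero=True` handles them in the Embedding layer.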