Not able to determine optimum values for the number of neurons in layers

https://colab.research.google.com/drive/16prrf-XMy-QjbAvqUpbdCV2XRjBLg6uY

Hi, my thinking was that we should first increase the dimensionality of the data and then gradually reduce it to the size of the output.

But after incorporating the changes you suggested,

model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])

I am not able to determine how much loss I can tolerate.
Also, my model is taking a lot of epochs to reduce the error.

So certainly I am doing something wrong.

Kindly correct me [Code link attached]

Also, ValueError: y contains previously unseen labels: 'RRAe'

How do I handle the scenario where, when encoding X_test, we encounter a new value that was not stored in the dictionary maintained by the label encoder?

Hey @chiragwxN, I have checked your notebook and it seems fine. The model will naturally take some time to reduce the error, and the optimal value of the loss varies from project to project, so generally we keep training as long as the validation loss keeps decreasing. There is also no direct way to infer the optimal number of nodes; that has to be found by experimentation.
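
As a reference, here is a minimal sketch of the "train until validation loss stops improving" idea using Keras's EarlyStopping callback. The layer sizes and the dummy data are placeholders for illustration, not values from your notebook:

```python
import numpy as np
from tensorflow.keras import models, layers
from tensorflow.keras.callbacks import EarlyStopping

# Dummy data just so the sketch runs end to end; substitute your own arrays.
x_train = np.random.rand(200, 10)
y_train = np.random.rand(200)

# Placeholder architecture: the layer sizes are illustrative, not tuned values.
model = models.Sequential([
    layers.Dense(64, activation='relu', input_shape=(10,)),
    layers.Dense(32, activation='relu'),
    layers.Dense(1),
])
model.compile(optimizer='rmsprop', loss='mse', metrics=['mae'])

# Stop once validation loss has not improved for 10 consecutive epochs,
# and roll back to the best weights seen so far.
early_stop = EarlyStopping(monitor='val_loss', patience=10,
                           restore_best_weights=True)

history = model.fit(x_train, y_train,
                    validation_split=0.2,
                    epochs=500,
                    callbacks=[early_stop],
                    verbose=0)
print('Stopped after', len(history.history['loss']), 'epochs')
```

With this, you can set a large epoch count and let the validation loss decide when to stop, instead of guessing a tolerable loss value up front.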

Try one-hot encoding for the features; with that, whenever a new category appears in x_test it will not pose a problem, since you can simply encode it as all zeros (see the sketch below).
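
A minimal sketch of that approach with scikit-learn's OneHotEncoder; the category values here are made up for illustration:

```python
import numpy as np
from sklearn.preprocessing import OneHotEncoder

# Categories seen during training (example values only).
x_train = np.array([['RRAb'], ['RRc'], ['RRAb']])
x_test = np.array([['RRAe']])  # a label never seen during training

# handle_unknown='ignore' makes the encoder emit an all-zero row for
# unseen categories instead of raising an error like LabelEncoder does.
encoder = OneHotEncoder(handle_unknown='ignore')
encoder.fit(x_train)

print(encoder.transform(x_test).toarray())  # -> [[0. 0.]]
```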

Also, please don't expect us to share complete solutions; otherwise it would not be a challenge problem.

Hope this resolves your doubt.
Please mark it as resolved in my doubts section. :blush: