Loss is calculated as NaN

Hi! I am getting the loss calculated as NaN for each epoch. I am not sure what the exact issue is. Please help me find the error. The code is at https://ide.codingblocks.com/s/306513

hey @tisandas2011 ,
I just had a look at your code.
The actual reason for the problem you are facing is your final output activation function.

Let's take an example: suppose for a given sample your output needs to be around 330.
But as you are using sigmoid, the prediction will always be squashed between 0 and 1, so the error will come out around 329 to 330 for a single sample, and you are using a batch size of 32.
So the total becomes huge. And with the error being this huge, the model isn't able to learn anything; from the first epoch itself it starts getting worse, and soon your loss value grows so large that it overflows the floating-point datatype, and hence you get NaN as the output.
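
Just to illustrate the scale of the problem, here is a tiny numpy sketch (with made-up numbers, not your actual data) showing how large the mean squared error already is on the very first batch when the targets are around 330 but the output layer can only produce values in (0, 1):

```python
import numpy as np

# Made-up illustration: a batch of 32 targets around 330,
# while a sigmoid output layer can only emit values in (0, 1).
y_true = np.full(32, 330.0)
y_pred = np.random.rand(32)      # whatever the untrained sigmoid happens to output

mse = np.mean((y_true - y_pred) ** 2)
print(mse)                       # roughly 1e5 before the model has learned anything
```

Gradients computed from a loss this large keep pushing the weights further and further until the values overflow, and that overflow is where the NaN comes from.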

So to correct this, you can use either of these two methods:

  1. Change your output activation from sigmoid to linear (tanh has the same bounded-range problem for targets this large), though you will then need to tune your model a lot.
  2. Keep the same model, but normalize your output values using MinMaxScaler or any other technique, and after prediction just inverse-transform them back into their actual value range.

Both can help you, but with the first option you need to work more on the model, whereas with the second you need to work more on the data. A minimal sketch of the second option is shown just below.
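
Here is that sketch, using made-up data and a small Keras model in place of your actual CSVs and architecture, just to show where the MinMaxScaler fits in:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense

# Stand-in data for your CSVs: 10 features, targets in the hundreds.
X = np.random.rand(200, 10)
y = X.sum(axis=1) * 50 + 100        # targets roughly in the 200-500 range

# Scale the targets into (0, 1) so a sigmoid output layer can actually reach them.
y_scaler = MinMaxScaler()
y_scaled = y_scaler.fit_transform(y.reshape(-1, 1))

model = Sequential([
    Dense(32, activation='relu', input_shape=(10,)),
    Dense(1, activation='sigmoid')   # same sigmoid output as before
])
model.compile(optimizer='adam', loss='mse')
model.fit(X, y_scaled, batch_size=32, epochs=20, verbose=0)

# Predictions come out in (0, 1); inverse-transform them back to the real range.
y_pred = y_scaler.inverse_transform(model.predict(X))
```

The important part is that the scaler is fit on the targets before training and inverse_transform is applied after prediction, so the model only ever sees values the sigmoid can actually produce.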

I hope this helped you.
Thank You :slightly_smiling_face:

Thanks Prashant for the explanation!

Hi! After following the changes mentioned, the accuracy is now showing around -464%. How can the model be tuned further? The updated code is at https://ide.codingblocks.com/s/311925

hey @tisandas2011 ,
Can you please share a link to these datasets, so that I can try them and let you know what changes you need to make?

Hi! Please find the following three links:

Hi! Please use the link to access the shared folder where the three CSV files are included: https://drive.google.com/drive/folders/1UOz-mNQV5-RMIQv6tuTF701xRK5dpQSK?usp=sharing

Please let me know in case you are not able to download the folder.

hey @tisandas2011 ,
I am really sorry that this took so much of your time.
Have a look at my code: https://colab.research.google.com/drive/1u488NTpwB0LKGbYDnUslws10ALaEIHbm?usp=sharing

I was achieving a score of 21 with this.
I know it could probably be pushed further, but with a dataset this small I don't expect a neural network to do much better.

As you know, deep learning needs a lot of data. With a dataset this small, I would recommend working with simpler machine learning algorithms; they can learn much better from such a small dataset. A rough sketch of that approach is below.
Or else, if you want, you can work on my code and improve it further.
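
If you do want to try the simpler route, here is roughly what it could look like with scikit-learn; the data here is random stand-in data, so you would swap in the features and targets loaded from your three CSV files:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Stand-in data -- replace with the features/targets from your CSV files.
X = np.random.rand(200, 10)
y = X.sum(axis=1) * 50 + 100

# A tree ensemble usually copes better than a neural network on a few hundred rows.
model = RandomForestRegressor(n_estimators=200, random_state=42)

# 5-fold cross-validated error on the original (unscaled) target values.
scores = cross_val_score(model, X, y, cv=5, scoring='neg_mean_absolute_error')
print("MAE per fold:", -scores)
print("Mean MAE:", -scores.mean())
```

On a dataset of this size, a cross-validated baseline like this is usually more reliable than tuning a neural network further.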

I hope this helps.
Thank You :slightly_smiling_face:.