Hi
In this video, when the instructor computed accuracy with an SVM model (without the best parameters) he got around 44.8%, but when I did the same thing with the same MNIST data I got 96.32%. How is such a large variation possible on the same data & technique?
Moreover, when the instructor computed the best estimator via GridSearchCV he got C=0.1 & kernel='poly', but doing the same thing I got C=5.0 & kernel='rbf'. Can you explain why I'm getting such variations when my technique & data are exactly the same as the instructor's?
Thank You
Doubt regarding accuracy
hey @pasta,
The score depends on the parameters and on the data we feed the model.
So the first thing is exactly that: you have most likely trained with different default parameters, a different scikit-learn version, or a different train/test split than the instructor (for example, a different `random_state`), so different results are expected. I wouldn't worry about it.
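To see how much the split alone matters, here is a minimal sketch using scikit-learn's `load_digits` (a small MNIST-style dataset) as a stand-in, since I don't have your exact notebook. Only the `random_state` of the split changes between runs, yet the test accuracy of an otherwise identical SVC can differ:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Small MNIST-like dataset used here purely for illustration.
X, y = load_digits(return_X_y=True)

scores = []
for seed in (0, 1, 2):
    # Same data, same model -- only the train/test split changes.
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, random_state=seed)
    scores.append(SVC().fit(X_tr, y_tr).score(X_te, y_te))

print(scores)  # accuracies differ from seed to seed
```

On this small dataset the spread is modest, but with different default kernels/parameters across library versions, or a very different split, the gap can be much larger.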
As for GridSearch: GridSearchCV scores every parameter combination using cross-validation, so if your data is shuffled or split differently from the instructor's, the folds it scores on are different, and it can pick different best parameters, especially when several combinations score close to each other. The final score or performance, however, should be roughly similar across runs.
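Here is a hedged sketch of that search, again on `load_digits` with an assumed small grid (your course's grid may differ). Passing an integer `cv` uses a fixed, non-shuffled fold scheme, so rerunning on the same data reproduces the same `best_params_`:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# Illustrative grid only -- values assumed, not taken from the course.
param_grid = {"C": [0.1, 1, 5], "kernel": ["poly", "rbf"]}

# cv=5 fixes how the folds are made, so the chosen parameters
# are reproducible on the same data with the same library version.
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

If two combinations have nearly identical mean CV scores, a slightly different split can flip which one wins, which is likely what happened between you and the instructor.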
These splits behave randomly by default in their implementation. Hence, to get results close to a single stable value for testing, we use KFold cross-validation: it gives us the mean performance of the model over several different test sets, and fixing the random seed makes it reproducible.
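The KFold idea above can be sketched like this (again on `load_digits` as a stand-in): a fixed `random_state` pins the shuffling, and the mean over folds is a steadier estimate than any single train/test split.

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import KFold, cross_val_score
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

# shuffle=True with a fixed random_state makes the folds reproducible.
cv = KFold(n_splits=5, shuffle=True, random_state=42)
scores = cross_val_score(SVC(), X, y, cv=cv)

# Mean over folds is more stable than any one split's accuracy.
print(scores.mean(), scores.std())
```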
I hope this helps you.