It's just a way of checking. We can't say for now that a feature is useless; even one with feature_importance_ less than 0.1 might turn out to be useful. So you just need to try it and check how it works.
Try using some advanced models like LightGBM, XGBoost, or CatBoost; they are highly effective when given proper tuning.
Although you can't get a score of 100, as those are our mentors who scored that while testing the system.
There is also the concept of OOF (out-of-fold) predictions: while performing cross-validation, say with 5 folds, in each fold we predict on the test data and then average those predictions over all the folds.
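A minimal sketch of that OOF scheme (assuming scikit-learn is available; LogisticRegression and the synthetic data are stand-ins for whatever model and data you actually use):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

# Synthetic train and "test" data, purely for illustration.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_test, _ = make_classification(n_samples=100, n_features=10, random_state=1)

n_folds = 5
kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)

oof_preds = np.zeros(len(X))        # out-of-fold predictions on the train set
test_preds = np.zeros(len(X_test))  # test predictions averaged over folds

for train_idx, val_idx in kf.split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])
    # Predict on the held-out fold, which this fit never saw.
    oof_preds[val_idx] = model.predict_proba(X[val_idx])[:, 1]
    # Predict on the test set and accumulate the average over all folds.
    test_preds += model.predict_proba(X_test)[:, 1] / n_folds
```

The oof_preds array gives you an honest validation score on the full train set, while test_preds is the fold-averaged submission.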
In my experience this has usually worked much better than training on a single split.
You can give it a try. Hope it works.