Titanic survivor mini project

Hey!
I got 89.6% accuracy with my model, which I built using DecisionTreeClassifier() and then boosted with AdaBoost. But I still only got a 42% score on the leaderboard. I have run into this problem before as well, even when my code is almost identical to the solution.
I cleaned the data and replaced the missing values using the .fillna() method, as taught by Prateek Sir.
Please help me figure out the problem, because I am unable to work it out myself. Also, PLEASE CLARIFY how our projects on Coding Blocks are scored on the leaderboard.
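For the cleaning step, I did something roughly like this (the column names here are the usual ones from the Kaggle Titanic CSV, so treat this as a sketch rather than my exact code):

import pandas as pd

train = pd.read_csv("train.csv")

# Numeric column: fill missing ages with the mean age
train["Age"] = train["Age"].fillna(train["Age"].mean())

# Categorical column: fill missing embarkation port with the most common value
train["Embarked"] = train["Embarked"].fillna(train["Embarked"].mode()[0])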

Following is my model code:
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier

# Decision tree on its own
dt = DecisionTreeClassifier(criterion='entropy')
dt.fit(X_train, Y_train)
dt.score(X_train, Y_train)   # accuracy on the training data
y_pred = dt.predict(X_test)

# AdaBoost using the same fully grown tree as the base estimator
ada = AdaBoostClassifier(base_estimator=dt, n_estimators=180, random_state=1)
ada.fit(X_train, Y_train)
yp = ada.predict(X_test)

I have predicted values using both classifiers.
PLEASE HELP ASAP.

hi @geekayd

The results are checked against the ground truth for the problem, so don't worry about the scoring method.

Try checking the score on the training data itself using the ada.score() method.
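Something along these lines will show you how far apart the training score and a held-out score are (a sketch only; the split size and variable names are placeholders around the X_train and Y_train you already have):

from sklearn.model_selection import train_test_split

# Hold back part of the training data as a validation set (20% is just an example)
X_tr, X_val, y_tr, y_val = train_test_split(X_train, Y_train, test_size=0.2, random_state=1)

ada.fit(X_tr, y_tr)
print("train score:", ada.score(X_tr, y_tr))          # accuracy on data the model has seen
print("validation score:", ada.score(X_val, y_val))   # accuracy on data it has not seen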

Sir, I always do that.
This time ada.score(X_train, y_train) gave me this output:
0.8751238850346879

But why the accuracy comes out to only 68% is still not clear to me.
Please tell me what procedures are generally preferred.

I used the following steps (a rough sketch of this pipeline is given after the list):

  1. Data cleaning
  2. Replacing NaN values with the column .mean()
  3. Label encoding
  4. Fitting a model with the sklearn RandomForestClassifier
  5. Boosting (AdaBoost)
  6. Predicting and finding the accuracy
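
Roughly, the pipeline looks like this (column names, feature list, and parameters are placeholders based on the standard Titanic dataset, not my exact code):

import pandas as pd
from sklearn.preprocessing import LabelEncoder
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier

train = pd.read_csv("train.csv")

# Fill missing numeric values with the column mean
train["Age"] = train["Age"].fillna(train["Age"].mean())

# Label-encode a categorical column
le = LabelEncoder()
train["Sex"] = le.fit_transform(train["Sex"])

features = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare"]
X_train, Y_train = train[features], train["Survived"]

# Random forest model
rf = RandomForestClassifier(n_estimators=100, random_state=1)
rf.fit(X_train, Y_train)

# AdaBoost as a separate boosted model
ada = AdaBoostClassifier(n_estimators=180, random_state=1)
ada.fit(X_train, Y_train)
print(ada.score(X_train, Y_train))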

hi @geekayd
The order you mentioned is exactly right,
but in this challenge boosting is overkill for the problem.
What I mean to say is that boosting will lead to overfitting on our challenge,
so please read up on overfitting.
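
As a rough illustration (a sketch only, reusing the X_train, Y_train and the parameters from your own snippet), comparing the training accuracy with a cross-validated accuracy will show the gap that overfitting creates:

from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import cross_val_score

# Boosting fully grown trees: very high training accuracy, weaker cross-validated accuracy
deep = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(criterion='entropy'),
                          n_estimators=180, random_state=1)
deep.fit(X_train, Y_train)
print("train accuracy:", deep.score(X_train, Y_train))
print("cross-validated accuracy:", cross_val_score(deep, X_train, Y_train, cv=5).mean())

# A simpler model (shallow trees, fewer rounds) usually narrows that gap
shallow = AdaBoostClassifier(base_estimator=DecisionTreeClassifier(max_depth=1),
                             n_estimators=50, random_state=1)
print("cross-validated accuracy (shallow):",
      cross_val_score(shallow, X_train, Y_train, cv=5).mean())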