Naive Bayes for text classification

Sir,
In the denominator of P(xi | y = c), you have written the sum of the counts of every word in the vocabulary for class c.
Shouldn't it just be the probability P(y = c)?

Hey @vineetchanana, no. When we talk about P(A|B), it means the probability that A occurs given that B has already occurred. So when we calculate P(xi | y = c), the condition y = c has already occurred, i.e. we have restricted ourselves to all the examples whose class is c. After that, we only need to calculate the probability of the word xi within those examples. A probability is always the number of occurrences divided by the total number of items, so the numerator is the count of xi in the class-c examples, and the denominator is the total count of all vocabulary words appearing across the examples with class = c:

P(xi | y = c) = count(xi in class c) / sum over all words w in the vocab of count(w in class c)
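For concreteness, here is a minimal Python sketch (using a made-up toy corpus, not the course's actual code) that computes P(xi | y = c) exactly as described above, with the denominator being the total word count over the class-c examples:

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus: (tokenized document, class label)
corpus = [
    (["cheap", "pills", "buy"], "spam"),
    (["meeting", "schedule", "buy"], "ham"),
    (["cheap", "cheap", "offer"], "spam"),
]

# Count word occurrences separately for each class
word_counts = defaultdict(Counter)
for tokens, label in corpus:
    word_counts[label].update(tokens)

def p_word_given_class(word, label):
    """P(word | y = label) = count(word in class) / total count of all words in that class."""
    class_counts = word_counts[label]
    total = sum(class_counts.values())  # the denominator the question asks about
    return class_counts[word] / total

print(p_word_given_class("cheap", "spam"))  # 3 / 6 = 0.5
print(p_word_given_class("buy", "spam"))    # 1 / 6 ≈ 0.167
```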

Hope this resolves your doubt. :blush:
