Bayes’ Theorem
Let’s start with Bayes’ theorem, also referred to as Bayes’ law or Bayes’ rule.
P(A|B) = P(B, A) / P(B)
= P(B|A) * P(A) / P(B)
= P(B|A) * P(A) / (P(B|A) * P(A) + P(B|^A) * P(^A))
P(A): prior probability. The probability that event A happens.
P(^A): the probability that event A does not happen.
P(B): evidence, or background. The probability that event B happens.
P(B|A), P(B|^A): conditional probability, or likelihood. The probability that event B happens given that A happened or did not happen, respectively.
P(A|B): posterior probability. The probability that A happens, taking into account the evidence B for and against A.
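As a quick sanity check, the expanded form of the theorem is easy to compute directly. Below is a minimal Python sketch; the numbers for the prior and the two likelihoods are purely illustrative, not taken from any particular problem:

```python
def posterior(prior_a, p_b_given_a, p_b_given_not_a):
    """P(A|B) via Bayes' theorem, expanding P(B) over A and ^A."""
    evidence = p_b_given_a * prior_a + p_b_given_not_a * (1 - prior_a)
    return p_b_given_a * prior_a / evidence

# Illustrative numbers: P(A) = 0.01, P(B|A) = 0.9, P(B|^A) = 0.05
print(posterior(0.01, 0.9, 0.05))  # ~0.154
```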
Naive Bayes
When used for classification, Bayes’ Theorem can be expressed as follows:
P(C|F1, F2, … Fn) = P(C, F1, F2, …, Fn) / P(F1, F2, … , Fn)
= P(F1, F2, … Fn|C) * P(C) / P(F1, F2, …, Fn)
C is some class/label we can classify a sample into, and F1, F2, … Fn represent features of the sample data.
P(F1, F2, …, Fn) doesn’t depend on C; it is normally given or can be calculated from the probabilities of the individual features. It’s effectively a constant and can be ignored for classification purposes.
The numerator can be expanded with the chain rule as follows:
P(C, F1, F2 … , Fn)
= P(C) * P(F1, F2, … , Fn|C)
= P(C) * P(F1 | C) * P(F2, F3, … Fn | F1, C)
= P(C) * P(F1 | C) * P(F2 | F1, C) * P(F3, … Fn | F1, F2, C)
…
= P(C) * P(F1 | C) * P(F2 | F1, C) * P(F3 | F1, F2, C) * … * P(Fn | F1, F2, …, Fn-1, C)
In Naive Bayes, all features are assumed to be conditionally independent given the class. Thus Fi is independent of every other feature Fj (j != i) once C is known. Therefore we have
P(F2 | F1, C) = P(F2 | C)
P(F3 | F1, F2, C) = P(F3 | C)
…
P(Fn | F1, F2, … Fn-1, C) = P(Fn | C)
Thus,
P(C, F1, F2, … , Fn) = P(C) * P(F1 | C) * P(F2 | C) * P(F3 | C) * … * P(Fn | C)
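To make the product concrete, here is a minimal sketch of how a Naive Bayes classifier could score each class. The function names and the dictionary layout are just one possible arrangement; log probabilities are used so that the product of many small likelihoods doesn’t underflow:

```python
from math import log

def naive_bayes_score(prior, likelihoods, features):
    """log P(C) + sum of log P(Fi | C) over the observed features."""
    score = log(prior)
    for f in features:
        score += log(likelihoods[f])
    return score

def classify(priors, likelihoods, features):
    """Return the class with the highest (unnormalized) posterior score."""
    return max(priors, key=lambda c: naive_bayes_score(priors[c], likelihoods[c], features))
```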
For example, suppose two authors A and B like to use the words “love”, “life” and “money”. The probabilities of these words appearing in A’s articles are 0.1, 0.1 and 0.8, and in B’s 0.5, 0.3 and 0.2. Now we see the phrase “love life”; which author is more likely to have written it?
Without any other information, there’s a 50% prior probability for either A or B. Assuming the words are independent features given the author, we can use Naive Bayes.
P(A | love, life) = P(A) * P(love | A) * P(life | A) / P(love, life) = 0.5 * 0.1 * 0.1 / P(love, life)
P(B | love, life) = P(B) * P(love | B) * P(life | B) / P(love, life) = 0.5 * 0.5 * 0.3 / P(love, life)
Clearly, it’s more likely that the phrase “love life” was written by author B. Note that P(love, life) doesn’t depend on the author and is just a scaling factor.
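Plugging the example’s numbers in (using plain products here to mirror the equations above; the division by P(love, life) is shown only to confirm it doesn’t change the ranking):

```python
priors = {"A": 0.5, "B": 0.5}
likelihoods = {
    "A": {"love": 0.1, "life": 0.1, "money": 0.8},
    "B": {"love": 0.5, "life": 0.3, "money": 0.2},
}

# Unnormalized posteriors: P(C) * P(love|C) * P(life|C)
scores = {c: priors[c] * likelihoods[c]["love"] * likelihoods[c]["life"] for c in priors}
print(scores)  # {'A': 0.005, 'B': 0.075}

# Dividing by P(love, life) = 0.005 + 0.075 = 0.08 gives the actual posteriors
total = sum(scores.values())
print({c: round(s / total, 4) for c, s in scores.items()})  # {'A': 0.0625, 'B': 0.9375}
```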
References:
- Bayes’ theorem: http://en.wikipedia.org/wiki/Bayes%27_theorem
- Naive Bayes classifier: http://en.wikipedia.org/wiki/Naive_Bayes_classifier
- Udacity, Intro to Machine Learning. Lesson 1. Naive Bayes.