18.16 Naive Bayes

Representation: Probabilities

Consider classifying input data by determining the probability of an event occurring given that another event has already occurred. Class and conditional probabilities are calculated to determine the likelihood of each possible class.

Naïve Bayes is a simple and effective machine learning classifier based on Bayes’ Theorem:

\[ P(A|B) = \frac{P(B|A) \, P(A)}{P(B)} \]

The class probabilities are \(P(A)\) (the probability of event \(A\)) and \(P(B)\) (the probability of event \(B\)). The conditional probabilities are \(P(A|B)\) (the probability of \(A\) occurring given that \(B\) has occurred) and \(P(B|A)\) (the probability of \(B\) occurring given that \(A\) has occurred).
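As an illustration with arbitrary numbers, suppose \(A\) is the event that an email is spam, with \(P(A) = 0.2\), and \(B\) is the event that the email contains the word free, which appears in 50% of spam and 5% of non-spam, so that \(P(B|A) = 0.5\) and \(P(B) = 0.5 \times 0.2 + 0.05 \times 0.8 = 0.14\). Then:

\[ P(A|B) = \frac{0.5 \times 0.2}{0.14} \approx 0.71 \]

so an email containing the word is classified as spam with probability of about 0.71.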

A Naïve Bayes classifier uses these probabilities to make predictions from the input dataset, treating the features as independent and as having equal weight or importance.
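As a minimal sketch of building such a classifier in R, assuming the e1071 package (one of several implementations of naiveBayes(), and not necessarily the tooling used elsewhere in this book):

    # Fit a Naive Bayes classifier to the built-in iris dataset using
    # e1071. Class and conditional probabilities are estimated from the
    # training data, with the features treated as independent.
    library(e1071)

    model <- naiveBayes(Species ~ ., data = iris)

    # Predict the most probable class for a few observations.
    predict(model, newdata = iris[c(1, 51, 101), ])

    # Inspect the posterior probabilities behind those predictions.
    predict(model, newdata = iris[c(1, 51, 101), ], type = "raw")

For numeric features such as those in iris, this implementation estimates the conditional probabilities by assuming each feature is normally distributed within each class.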

The idea is simple and can be applied to small datasets. It suffers, though, from the zero frequency problem: if a feature value never occurs together with a particular class in the training data, the estimated conditional probability is 0, and since the class probabilities are multiplied together, the class is then excluded from further consideration. Laplace smoothing addresses this by assigning a small non-zero probability instead.
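To illustrate Laplace smoothing, again assuming e1071 and using a tiny hypothetical dataset, the laplace argument adds a pseudo-count to every feature-value and class combination so that no conditional probability is estimated as exactly 0:

    # A hypothetical four-row dataset in which "rain" never occurs
    # together with the class "no".
    library(e1071)

    train <- data.frame(
      outlook = factor(c("sunny", "sunny", "rain", "rain")),
      play    = factor(c("no", "no", "yes", "yes"))
    )

    # Without smoothing, P(outlook = rain | play = no) is 0.
    m0 <- naiveBayes(play ~ outlook, data = train, laplace = 0)
    m0$tables$outlook

    # With laplace = 1 each count is incremented by one, so the
    # estimate becomes (0 + 1) / (2 + 2) = 0.25 rather than 0.
    m1 <- naiveBayes(play ~ outlook, data = train, laplace = 1)
    m1$tables$outlook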

The method also assumes that the features are independent, and when this assumption is not met it can perform poorly.


