Boosting
Boosting is a machine learning meta-algorithm for supervised learning. It is based on the question posed by Kearns[1]: can a set of weak learners create a single strong learner? A weak learner is defined as a classifier that is only slightly correlated with the true classification (it labels examples better than random guessing, but only slightly so). In contrast, a strong learner is a classifier that is arbitrarily well correlated with the true classification.
The affirmative answer to Kearns' question has significant ramifications in machine learning and statistics.
Boosting algorithms
While boosting is not algorithmically constrained, most boosting algorithms consist of iteratively learning weak classifiers with respect to a distribution and adding them to a final strong classifier. When a weak learner is added, it is typically weighted in a way related to its accuracy. After a weak learner is added, the data are reweighted: examples that are misclassified gain weight and examples that are classified correctly lose weight (some boosting algorithms actually decrease the weight of repeatedly misclassified examples, e.g., boost by majority and BrownBoost). Thus, future weak learners focus more on the examples that previous weak learners misclassified.
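This scheme is easiest to see concretely. Below is a minimal sketch of an AdaBoost-style reweighting loop in Python, assuming binary labels in {-1, +1} and a one-feature decision stump as the weak learner; the function names (adaboost, train_stump) and the implementation details are illustrative, not a reference implementation.

```python
import numpy as np

def train_stump(X, y, w):
    """Weak learner: the one-feature threshold classifier with the
    lowest weighted error under the current distribution w."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):  # polarity of the threshold test
                pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best_err:
                    best_err, best = err, (j, t, s)
    return best, best_err

def adaboost(X, y, rounds=10):
    """Iteratively add weighted weak classifiers to a strong classifier."""
    n = len(y)
    w = np.full(n, 1.0 / n)                  # start from the uniform distribution
    ensemble = []
    for _ in range(rounds):
        (j, t, s), err = train_stump(X, y, w)
        err = max(err, 1e-10)                # guard against a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)  # weight given to this weak learner
        pred = np.where(s * (X[:, j] - t) > 0, 1, -1)
        # Reweight the data: misclassified examples gain weight,
        # correctly classified examples lose weight.
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, t, s))
    return ensemble

def predict(ensemble, X):
    """Strong classifier: sign of the alpha-weighted sum of stump votes."""
    score = np.zeros(len(X))
    for alpha, j, t, s in ensemble:
        score += alpha * np.where(s * (X[:, j] - t) > 0, 1, -1)
    return np.sign(score)
```

Each round thus concentrates the distribution on the hardest examples, which is exactly the behavior described above; boost by majority and BrownBoost modify the update so that persistently misclassified examples are eventually given up on rather than emphasized further.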
There are many boosting algorithms. The original ones, proposed by Robert Schapire (a recursive majority gate formulation [2]) and Yoav Freund (boost by majority [3]), were not adaptive and could not take full advantage of the weak learners.
Strictly speaking, only algorithms for which boosting can be proven in the probably approximately correct (PAC) learning formulation are boosting algorithms. Other algorithms that are similar in spirit are sometimes called "leveraging algorithms", although they are also sometimes incorrectly called boosting algorithms.[4]
Examples of boosting algorithms
The main variation between many boosting algorithms is their method of weighting training data points and hypotheses. AdaBoost is very popular and perhaps the most significant historically, as it was the first algorithm that could adapt to the weak learners. However, there are many more recent algorithms such as LPBoost, TotalBoost, BrownBoost, MadaBoost, LogitBoost, and others. Many boosting algorithms fit into the AnyBoost framework,[5] which shows that boosting performs gradient descent in function space using a convex cost function.
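The gradient-descent view can be made concrete: each round fits the weak learner to the negative gradient of the cost, evaluated at the current ensemble's predictions, and takes a small step in that direction in function space. The sketch below is illustrative only; the names (gradient_boost, fit_residual_stump) are hypothetical, the base learner is a least-squares stump, and real gradient-boosting implementations differ in many details.

```python
import numpy as np

def fit_residual_stump(X, r):
    """Base learner: least-squares fit of a one-split stump to the
    pseudo-residuals r (the negative gradient of the cost)."""
    best, best_sse = None, np.inf
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j])[:-1]:  # skip the split with an empty side
            left = X[:, j] <= t
            vl, vr = r[left].mean(), r[~left].mean()
            sse = ((r[left] - vl) ** 2).sum() + ((r[~left] - vr) ** 2).sum()
            if sse < best_sse:
                best_sse, best = sse, (j, t, vl, vr)
    return best

def gradient_boost(X, y, cost_grad, rounds=50, lr=0.1):
    """Gradient descent in function space: each round, descend the cost
    by fitting a weak learner to its negative gradient."""
    F = np.zeros(len(y))              # current ensemble output F(x_i)
    model = []
    for _ in range(rounds):
        r = -cost_grad(F, y)          # steepest-descent direction
        j, t, vl, vr = fit_residual_stump(X, r)
        F += lr * np.where(X[:, j] <= t, vl, vr)
        model.append((j, t, vl, vr))
    return model

# With the exponential cost exp(-y * F) and labels in {-1, +1}, the negative
# gradient is y * exp(-y * F), recovering AdaBoost's emphasis on misclassified
# examples as a special case of this framework.
exp_cost_grad = lambda F, y: -y * np.exp(-y * F)
```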
References
Footnotes
- ^ Michael Kearns. Thoughts on hypothesis boosting. Unpublished manuscript, 1988.
- ^ Robert E. Schapire. The strength of weak learnability. Machine Learning, 5(2):197–227, 1990.
- ^ Yoav Freund. Boosting a weak learning algorithm by majority. In Proceedings of the Third Annual Workshop on Computational Learning Theory, 1990.
- ^ Nir Krause and Yoram Singer. Leveraging the margin more carefully. In Proceedings of the International Conference on Machine Learning (ICML), 2004.
- ^ Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean. Boosting algorithms as gradient descent. In S. A. Solla, T. K. Leen, and K.-R. Müller, editors, Advances in Neural Information Processing Systems 12, pages 512–518. MIT Press, 2000.