AdaBoost.MH

A decision-theoretic generalization of on-line learning and an application to boosting.

In the first part of the paper we consider the problem of dynamically apportioning resources among a set of options in a worst-case on-line framework. The model we study can be interpreted as a broad, abstract extension of the well-studied on-line prediction model to a general decision-theoretic setting. We show that the multiplicative weight-update Littlestone-Warmuth rule can be adapted to this model, yielding bounds that are slightly weaker in some cases, but applicable to a considerably more general class of learning problems. We show how the resulting learning algorithm can be applied to a variety of problems, including gambling, multiple-outcome prediction, repeated games, and prediction of points in $\mathbf{R}^n$. In the second part of the paper we apply the multiplicative weight-update technique to derive a new boosting algorithm. This boosting algorithm does not require any prior knowledge about the performance of the weak learning algorithm. We also study generalizations of the new boosting algorithm to the problem of learning functions whose range, rather than being binary, is an arbitrary finite set or a bounded segment of the real line.
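As a rough illustration of the multiplicative weight-update at the heart of the boosting algorithm described above, here is a minimal NumPy sketch of binary AdaBoost with decision stumps. The names (Stump, best_stump, fit_adaboost, n_rounds) are our own and not from the paper, and the sketch omits the multi-label extension (AdaBoost.MH) that handles an arbitrary finite label set; it only shows how misclassified examples are re-weighted each round.

```python
import numpy as np

class Stump:
    """Axis-aligned threshold classifier returning labels in {-1, +1}."""
    def __init__(self, feature, threshold, polarity):
        self.feature, self.threshold, self.polarity = feature, threshold, polarity

    def predict(self, X):
        return self.polarity * np.where(X[:, self.feature] <= self.threshold, -1.0, 1.0)

def best_stump(X, y, w):
    """Exhaustively pick the stump with the smallest weighted training error."""
    best, best_err = None, np.inf
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (-1.0, 1.0):
                s = Stump(j, thr, pol)
                err = np.sum(w * (s.predict(X) != y))
                if err < best_err:
                    best, best_err = s, err
    return best, best_err

def fit_adaboost(X, y, n_rounds=20):
    """Binary AdaBoost sketch; y must contain labels in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)          # distribution over training examples
    ensemble = []
    for _ in range(n_rounds):
        stump, err = best_stump(X, y, w)
        err = max(err, 1e-12)        # guard against division by zero
        alpha = 0.5 * np.log((1.0 - err) / err)
        # multiplicative weight update: up-weight misclassified examples
        w *= np.exp(-alpha * y * stump.predict(X))
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s.predict(X) for a, s in ensemble)
    return np.sign(score)
```

The key point mirrored from the abstract is that the algorithm needs no prior knowledge of the weak learner's accuracy: each round's weight alpha is computed from the observed weighted error, and the example distribution is adjusted multiplicatively before the next weak hypothesis is trained.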


References in zbMATH (referenced in 456 articles, 1 standard article)

Showing results 1 to 20 of 456, sorted by year (citations).


  1. Aria, Massimo; D’Ambrosio, Antonio; Iorio, Carmela; Siciliano, Roberta; Cozza, Valentina: Dynamic recursive tree-based partitioning for malignant melanoma identification in skin lesion dermoscopic images (2020)
  2. Cappozzo, Andrea; Greselin, Francesca; Murphy, Thomas Brendan: A robust approach to model-based classification based on trimming and constraints. Semi-supervised learning in presence of outliers and label noise (2020)
  3. Connamacher, Harold; Pancha, Nikil; Liu, Rui; Ray, Soumya: RankBoost(+): an improvement to RankBoost (2020)
  4. Fan, Jun; Xiang, Dao-Hong: Quantitative convergence analysis of kernel based large-margin unified machines (2020)
  5. Fujita, Takahiro; Hatano, Kohei; Takimoto, Eiji: Boosting over non-deterministic ZDDs (2020)
  6. Hung, Ying-Chao; Michailidis, George; PakHai Lok, Horace: Locating infinite discontinuities in computer experiments (2020)
  7. Lai, Yuanhao; McLeod, Ian: Ensemble quantile classifier (2020)
  8. Lavrač, Nada; Škrlj, Blaž; Robnik-Šikonja, Marko: Propositionalization and embeddings: two sides of the same coin (2020)
  9. Lopes, Miles E.: Estimating a sharp convergence bound for randomized ensembles (2020)
  10. Lu, Haihao; Mazumder, Rahul: Randomized gradient boosting machine (2020)
  11. Nguyen, Thi Thanh Sang; Do, Pham Minh Thu: Classification optimization for training a large dataset with naïve Bayes (2020)
  12. Ruehle, Fabian: Data science applications to string theory (2020)
  13. Tan, Zhi-Hao; Tan, Peng; Jiang, Yuan; Zhou, Zhi-Hua: Multi-label optimal margin distribution machine (2020)
  14. van Engelen, Jesper E.; Hoos, Holger H.: A survey on semi-supervised learning (2020)
  15. Wang, Jie; Wang, Bo; Liang, Jing; Yu, Kunjie; Yue, Caitong; Ren, Xiangyang: Ensemble learning via multimodal multiobjective differential evolution and feature selection (2020)
  16. Yang, Wenzhuo; Sim, Melvyn; Xu, Huan: Goal scoring, coherent loss and applications to machine learning (2020)
  17. Zhao, Peng; Cai, Le-Wen; Zhou, Zhi-Hua: Handling concept drift via model reuse (2020)
  18. Zhu, Qiwu; Xiong, Qingyu; Wang, Kai; Lu, Wang; Liu, Tong: Accurate WiFi-based indoor localization by using fuzzy classifier and MLPs ensemble in complex environment (2020)
  19. Agrawal, Shipra; Devanur, Nikhil R.: Bandits with global convex constraints and objective (2019)
  20. Aravkin, Aleksandr Y.; Bottegal, Giulio; Pillonetto, Gianluigi: Boosting as a kernel-based method (2019)
