LIBSVM

LIBSVM is a library for Support Vector Machines (SVMs). We have been actively developing this package since the year 2000. The goal is to help users easily apply SVM to their applications. LIBSVM has gained wide popularity in machine learning and many other areas. In this article, we present all implementation details of LIBSVM. Issues such as solving SVM optimization problems, theoretical convergence, multiclass classification, probability estimates, and parameter selection are discussed in detail: http://dl.acm.org/citation.cfm?id=1961199
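To make the "easily apply SVM" point concrete, here is a minimal sketch of preparing data in LIBSVM's sparse text format ("label index:value ...", with 1-based feature indices and zero entries omitted), which the package's training tools consume. The helper function name is illustrative, not part of LIBSVM's API.

```python
# Hedged sketch: emit one example in LIBSVM's sparse text format.
# Each line is "label index:value index:value ...", indices are 1-based,
# and zero-valued features are simply left out.
def to_libsvm_line(label, features):
    """Format one example; `features` maps 1-based index -> value."""
    parts = [str(label)]
    for idx in sorted(features):
        value = features[idx]
        if value != 0:  # zeros are omitted in the sparse format
            parts.append(f"{idx}:{value}")
    return " ".join(parts)

print(to_libsvm_line(1, {1: 0.5, 3: -1.2}))  # -> 1 1:0.5 3:-1.2
```

A file of such lines can be passed directly to LIBSVM's `svm-train` command-line tool.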


References in zbMATH (referenced in 1136 articles)

Showing results 1 to 20 of 1136.
Sorted by year (citations)


  1. Bian, Fengmiao; Liang, Jingwei; Zhang, Xiaoqun: A stochastic alternating direction method of multipliers for non-smooth and non-convex optimization (2021)
  2. Blanchard, Gilles; Deshmukh, Aniket Anand; Dogan, Urun; Lee, Gyemin; Scott, Clayton: Domain generalization by marginal transfer learning (2021)
  3. Brust, Johannes J.; Di, Zichao (Wendy); Leyffer, Sven; Petra, Cosmin G.: Compact representations of structured BFGS matrices (2021)
  4. Burkina, M.; Nazarov, I.; Panov, M.; Fedonin, G.; Shirokikh, B.: Inductive matrix completion with feature selection (2021)
  5. Galvan, Giulio; Lapucci, Matteo; Lin, Chih-Jen; Sciandrone, Marco: A two-level decomposition framework exploiting first and second order information for SVM training problems (2021)
  6. Gower, Robert M.; Richtárik, Peter; Bach, Francis: Stochastic quasi-gradient methods: variance reduction via Jacobian sketching (2021)
  7. Günlük, Oktay; Kalagnanam, Jayant; Li, Minhan; Menickelly, Matt; Scheinberg, Katya: Optimal decision trees for categorical data via integer programming (2021)
  8. Han, Biao; Shang, Chao; Huang, Dexian: Multiple kernel learning-aided robust optimization: learning algorithm, computational tractability, and usage in multi-stage decision-making (2021)
  9. Hanzely, Filip; Richtárik, Peter; Xiao, Lin: Accelerated Bregman proximal gradient methods for relatively smooth convex optimization (2021)
  10. Iiduka, Hideaki: Inexact stochastic subgradient projection method for stochastic equilibrium problems with nonmonotone bifunctions: application to expected risk minimization in machine learning (2021)
  11. Jahani, Majid; Gudapati, Naga Venkata C.; Ma, Chenxin; Tappenden, Rachael; Takáč, Martin: Fast and safe: accelerated gradient methods with optimality certificates and underestimate sequences (2021)
  12. Jiang, Gaoxia; Wang, Wenjian; Qian, Yuhua; Liang, Jiye: A unified sample selection framework for output noise filtering: an error-bound perspective (2021)
  13. Lei, Yunwen; Ying, Yiming: Stochastic proximal AUC maximization (2021)
  14. Li, Zhu; Ton, Jean-Francois; Oglic, Dino; Sejdinovic, Dino: Towards a unified analysis of random Fourier features (2021)
  15. Lu, Haihao; Freund, Robert M.: Generalized stochastic Frank-Wolfe algorithm with stochastic “substitute” gradient for structured convex optimization (2021)
  16. Masoudi, Babak; Daneshvar, Sabalan; Razavi, Seyed Naser: A multi-modal fusion of features method based on deep belief networks to diagnosis schizophrenia disease (2021)
  17. Mudunuru, M. K.; Karra, S.: Physics-informed machine learning models for predicting the progress of reactive-mixing (2021)
  18. Nakayama, Shummin; Narushima, Yasushi; Yabe, Hiroshi: Inexact proximal memoryless quasi-Newton methods based on the Broyden family for minimizing composite functions (2021)
  19. Rodomanov, Anton; Nesterov, Yurii: Greedy quasi-Newton methods with explicit superlinear convergence (2021)
  20. Sakai, Tomoya; Niu, Gang; Sugiyama, Masashi: Information-theoretic representation learning for positive-unlabeled classification (2021)
