• AdaCost

  • Referenced in 26 articles [sw33192]
  • cost of misclassifications to update the training distribution on successive boosting rounds. The purpose ... bound of cumulative misclassification cost of the training set. Empirical evaluations have shown significant reduction...
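
  As a flavor of the update rule, a minimal NumPy sketch of one cost-sensitive reweighting round in the spirit of AdaCost, using the beta cost-adjustment setting recommended in the original paper (variable names are ours):

  ```python
  import numpy as np

  def adacost_reweight(D, y, h, c, alpha):
      """One AdaCost-style boosting round (sketch).
      D: weight distribution over examples; y, h: true labels and weak-learner
      predictions in {-1, +1}; c: per-example misclassification costs in [0, 1];
      alpha: weak-learner weight. beta raises the weights of costly mistakes
      faster and lowers the weights of costly correct examples more slowly."""
      beta = np.where(y == h, -0.5 * c + 0.5, 0.5 * c + 0.5)
      D_new = D * np.exp(-alpha * y * h * beta)
      return D_new / D_new.sum()  # renormalize to a distribution
  ```
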
  • SqueezeNet

  • Referenced in 8 articles [sw30749]
  • require less communication across servers during distributed training. (2) Smaller DNNs require less bandwidth...
  • Horizon

  • Referenced in 5 articles [sw31157]
  • includes data preprocessing, feature transformation, distributed training, counterfactual policy evaluation, optimized serving, and a model...
  • SimpleDet

  • Referenced in 3 articles [sw30738]
  • SimpleDet: A Simple and Versatile Distributed Framework for Object Detection and Instance Recognition. Object detection ... video surveillance and medical image analysis. However, training object detection models on large scale datasets ... detection framework called SimpleDet which enables the training of state-of-the-art detection models ... with best practice. SimpleDet also supports distributed training with near linear scaling...
  • GPDT

  • Referenced in 44 articles [sw04803]
  • solving the quadratic program arising in training support vector machines for classification problems is introduced ... computing resources available on multiprocessor systems, by distributing the heaviest computational tasks of each decomposition ... real-world data sets with millions of training samples highlight how the software makes large scale...
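
  For reference, the quadratic program in question is the standard SVM dual; decomposition methods such as GPDT fix most variables and optimize a small working set of the alpha variables at each iteration:

  ```latex
  \min_{\alpha \in \mathbb{R}^n} \; \tfrac{1}{2}\,\alpha^{\top} Q\,\alpha - \mathbf{1}^{\top}\alpha
  \quad \text{s.t.} \quad y^{\top}\alpha = 0, \quad 0 \le \alpha_i \le C,
  \qquad \text{where } Q_{ij} = y_i\, y_j\, K(x_i, x_j).
  ```
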
  • RSGHB

  • Referenced in 4 articles [sw23112]
  • censored normal, and the Johnson SB distribution. Kenneth Train’s Matlab and Gauss code ... found here: http://elsa.berkeley.edu/Software/abstracts/train1006mxlhb.html See Train’s chapter on HB in Discrete Choice with ... paper on using HB with non-normal distributions here: http://eml.berkeley.edu/~train/trainsonnier.pdf...
  • Horovod

  • Referenced in 2 articles [sw28748]
  • Horovod: fast and easy distributed deep learning in TensorFlow. Training modern deep learning models requires ... mess and stick with slower single-GPU training. In this paper we introduce Horovod ... user code, enabling faster, easier distributed training in TensorFlow. Horovod is available under the Apache...
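
  The usage pattern is compact enough to sketch. The hvd.* calls below are Horovod's TensorFlow 2 API; the toy model and the learning-rate scaling constant are our own illustration:

  ```python
  import tensorflow as tf
  import horovod.tensorflow as hvd

  hvd.init()  # one process per GPU, typically launched with horovodrun

  # Pin each worker process to a single local GPU.
  gpus = tf.config.experimental.list_physical_devices('GPU')
  if gpus:
      tf.config.experimental.set_visible_devices(gpus[hvd.local_rank()], 'GPU')

  model = tf.keras.Sequential([tf.keras.layers.Dense(10)])  # toy model
  opt = tf.keras.optimizers.SGD(0.01 * hvd.size())          # scale lr by worker count
  loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

  @tf.function
  def train_step(x, y, first_batch):
      with tf.GradientTape() as tape:
          loss = loss_fn(y, model(x, training=True))
      tape = hvd.DistributedGradientTape(tape)  # ring-allreduce of gradients
      grads = tape.gradient(loss, model.trainable_variables)
      opt.apply_gradients(zip(grads, model.trainable_variables))
      if first_batch:  # sync initial parameters from rank 0 to all workers
          hvd.broadcast_variables(model.variables, root_rank=0)
          hvd.broadcast_variables(opt.variables(), root_rank=0)
      return loss
  ```
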
  • Catalyst.RL

  • Referenced in 2 articles [sw31154]
  • library include large-scale asynchronous distributed training, easy-to-use configuration files with the complete ... frame stacking, n-step returns, value distributions, etc. To vindicate the usefulness of our framework ... capitalizing on the ability of catalyst.RL to train high-quality and sample-efficient RL agents...
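
  Of the listed ingredients, n-step returns are easy to pin down concretely. A generic sketch, not catalyst.RL's API; the array layout is our assumption:

  ```python
  def n_step_returns(rewards, values_next, dones, gamma=0.99, n=3):
      """n-step return G_t = sum_{k=0}^{n-1} gamma^k r_{t+k} + gamma^n V(s_{t+n}),
      truncated at episode ends and at the end of the batch.
      values_next[t] is the critic's estimate of V(s_{t+1})."""
      T = len(rewards)
      returns = [0.0] * T
      for t in range(T):
          g, discount, k, terminal = 0.0, 1.0, t, False
          while k < T and k < t + n:
              g += discount * rewards[k]
              discount *= gamma
              if dones[k]:
                  terminal = True
                  break
              k += 1
          if not terminal:
              g += discount * values_next[k - 1]  # bootstrap with V(s_k)
          returns[t] = g
      return returns
  ```
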
  • Chiron

  • Referenced in 1 article [sw32566]
  • data outside the enclave. To support distributed training, Chiron executes multiple concurrent enclaves that exchange...
  • PaddleFL

  • Referenced in 1 article [sw34097]
  • federated learning system in large scale distributed clusters. In PaddleFL, several federated learning strategies will ... Application of traditional machine learning training strategies such as Multi-task learning, Transfer Learning ... Based on PaddlePaddle’s large scale distributed training and elastic scheduling of training...
  • TernGrad

  • Referenced in 1 article [sw22206]
  • well-known bottleneck of distributed training. In this work, we propose TernGrad that uses ternary...
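
  The quantizer itself is compact: per the paper, each gradient component is stochastically mapped to one of three levels {-s, 0, +s} so that the result stays unbiased. A NumPy sketch with our variable names:

  ```python
  import numpy as np

  def ternarize(grad, rng=None):
      """TernGrad-style stochastic ternarization (sketch).
      s = max|g_i| is the per-tensor scaler; keeping component i with
      probability |g_i|/s makes the quantizer unbiased: E[output] = grad."""
      rng = rng or np.random.default_rng()
      s = np.abs(grad).max()
      if s == 0.0:
          return np.zeros_like(grad)
      keep = rng.random(grad.shape) < np.abs(grad) / s  # Bernoulli(|g_i|/s)
      return s * np.sign(grad) * keep
  ```
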
  • CodeNet

  • Referenced in 1 article [sw30730]
  • CodeNet: Training Large Scale Neural Networks in Presence of Soft-Errors. This work ... proposes the first strategy to make distributed training of neural networks resilient to computing errors...
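
  CodeNet's resilience rests on coding-theoretic redundancy woven into the linear algebra of training. For flavor, here is the classical algorithm-based fault tolerance (ABFT) checksum on which such schemes build; a generic sketch, not CodeNet's actual construction:

  ```python
  import numpy as np

  def checked_matmul(A, B, tol=1e-8):
      """Checksum-protected matrix product. A row of column sums is appended
      to A before the multiply; afterwards the last row of the result must
      equal the column sums of the product, or a soft error corrupted it."""
      A_c = np.vstack([A, A.sum(axis=0, keepdims=True)])  # encode
      C_c = A_c @ B
      C, check = C_c[:-1], C_c[-1]
      if not np.allclose(C.sum(axis=0), check, atol=tol):  # verify
          raise RuntimeError("soft error detected in matrix product")
      return C
  ```
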
  • dna2vec

  • Referenced in 1 article [sw18696]
  • propose a novel method to train distributed representations of variable-length k-mers. Our method ... popular word embedding model word2vec, which is trained on a shallow two-layer neural network...
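
  The recipe can be sketched with gensim's word2vec (4.x API): tokenize each sequence into overlapping k-mers of several lengths and embed them like words. The toy sequence and hyperparameters are made up, and dna2vec's actual pipeline differs in detail:

  ```python
  from gensim.models import Word2Vec

  def kmer_tokens(seq, k):
      """Overlapping k-mers of a DNA string, treated as 'words'."""
      return [seq[i:i + k] for i in range(len(seq) - k + 1)]

  seq = "ACGTACGGTCCA"
  sentences = [kmer_tokens(seq, k) for k in (3, 4, 5)]  # variable-length k-mers

  model = Word2Vec(sentences, vector_size=16, window=4, min_count=1, sg=1)
  print(model.wv.most_similar("ACG"))  # nearest k-mers in embedding space
  ```
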
  • UGM

  • Referenced in 1 article [sw28257]
  • probabilities. Sampling: Generating samples from the distribution. Training: Fitting a model to a given dataset...
  • FedML

  • Referenced in 1 article [sw34090]
  • comparisons. FedML supports three computing paradigms (distributed training, mobile on-device training, and standalone simulation...
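
  At the heart of any such system sits the federated aggregation step. A generic FedAvg-style sketch (McMahan et al.'s weighting; not FedML's API, and the data layout is our assumption):

  ```python
  import numpy as np

  def fedavg(client_weights, client_sizes):
      """Average each layer's parameters across clients, weighted by the
      size of each client's local dataset.
      client_weights: per-client lists of np.ndarray layers."""
      total = float(sum(client_sizes))
      return [sum(w * (n / total) for w, n in zip(layer, client_sizes))
              for layer in zip(*client_weights)]
  ```
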
  • Joone

  • Referenced in 1 article [sw30673]
  • test any neural network, and a Distributed Training Environment to train in parallel mode many...
  • Train-o-matic

  • Referenced in 1 article [sw31647]
  • resource. Moreover, as the sense distribution in the training set is pivotal to boosting ... when the learned distributions are taken into account for generating the training sets, the performance ... show how our sense distribution learning techniques aid Train-O-Matic to scale well over ... training sets in 5 different languages and the sense distributions for each domain of SemEval...
  • HTK

  • Referenced in 15 articles [sw07937]
  • provide sophisticated facilities for speech analysis, HMM training, testing and results analysis. The software supports ... both continuous density mixture Gaussians and discrete distributions and can be used to build complex...
  • BlendTorch

  • Referenced in 1 article [sw35704]
  • help create infinite streams of synthetic training data. BlendTorch generates data by massively randomizing ... fidelity simulations and takes care of distributing artificial training data for model learning in real...