• LSTM

  • Referenced in 24 articles [sw03373]
  • The human brain is a recurrent neural network...
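  As a point of reference for the entries below, a minimal sketch of running a batch of sequences through an LSTM layer; PyTorch is assumed here (none of the listed tools are implied), and all sizes are illustrative:

      import torch
      import torch.nn as nn

      # Single-layer LSTM: 16 input features per step, 32 hidden units.
      lstm = nn.LSTM(input_size=16, hidden_size=32, batch_first=True)

      x = torch.randn(4, 10, 16)        # 4 sequences, 10 time steps, 16 features
      output, (h_n, c_n) = lstm(x)      # output: per-step hidden states (4, 10, 32)
      print(output.shape, h_n.shape)    # h_n: final hidden state (1, 4, 32)
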
• GNMT

  • Referenced in 11 articles [sw26579]
  • issues. Our model consists of a deep LSTM network with 8 encoder and 8 decoder...
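  The stacked encoder-decoder pattern the excerpt describes, as a hedged PyTorch sketch; the real GNMT adds attention, residual connections, and a bidirectional bottom encoder layer, and the sizes here are illustrative:

      import torch
      import torch.nn as nn

      HIDDEN, LAYERS = 64, 8            # GNMT uses 8 encoder and 8 decoder layers

      encoder = nn.LSTM(HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)
      decoder = nn.LSTM(HIDDEN, HIDDEN, num_layers=LAYERS, batch_first=True)

      src = torch.randn(2, 15, HIDDEN)  # already-embedded source sentences
      _, state = encoder(src)           # final (h, c) stack summarizes the source
      tgt = torch.randn(2, 12, HIDDEN)  # embedded target prefix (teacher forcing)
      out, _ = decoder(tgt, state)      # (2, 12, 64) decoder states
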
• Lasagne

  • Referenced in 6 articles [sw20936]
  • recurrent networks including Long Short-Term Memory (LSTM), and any combination thereof. Allows architectures...
• LSTMVis

  • Referenced in 3 articles [sw27157]
  • particular long short-term memory (LSTM) networks, are a remarkably effective tool for sequence modeling...
• CURRENNT

  • Referenced in 1 article [sw12814]
  • bidirectional RNNs with Long Short-Term Memory (LSTM) memory cells which overcome the vanishing gradient ... first publicly available parallel implementation of deep LSTM-RNNs. Benchmarks are given on a noisy ... CHiME Speech Separation and Recognition Challenge, where LSTM-RNNs have been shown to deliver best ... result, double digit speedups in bidirectional LSTM training are achieved with respect to a reference...
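  CURRENNT itself is a CUDA/C++ toolkit; the bidirectional LSTM pattern it parallelizes can be sketched in PyTorch (feature and layer sizes are illustrative):

      import torch
      import torch.nn as nn

      # One pass left-to-right, one right-to-left; per-step outputs concatenated.
      bilstm = nn.LSTM(input_size=40, hidden_size=128,
                       bidirectional=True, batch_first=True)

      frames = torch.randn(8, 100, 40)  # e.g. 100 frames of 40-dim acoustic features
      out, _ = bilstm(frames)
      print(out.shape)                  # (8, 100, 256): forward + backward halves
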
• Bio-LSTM

  • Referenced in 1 article [sw25897]
  • LSTM: A Biomechanically Inspired Recurrent Neural Network for 3D Pedestrian Pose and Gait Prediction ... biomechanically inspired recurrent neural network (Bio-LSTM) that can predict the location and 3D articulated...
• N3LDG

  • Referenced in 1 article [sw30284]
  • execute computation graphs when training CNN, Bi-LSTM, and Tree-LSTM. When using ... using GPU to train CNN and Tree-LSTM, N3LDG is better than PyTorch...
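  Unlike a chain LSTM, a Tree-LSTM composes a node's state from its children. A condensed sketch of the child-sum variant (Tai et al. 2015), assuming PyTorch and illustrative sizes:

      import torch
      import torch.nn as nn

      class ChildSumTreeLSTMCell(nn.Module):
          def __init__(self, input_size, hidden_size):
              super().__init__()
              self.iou = nn.Linear(input_size + hidden_size, 3 * hidden_size)
              self.f_x = nn.Linear(input_size, hidden_size)
              self.f_h = nn.Linear(hidden_size, hidden_size)

          def forward(self, x, child_h, child_c):
              # child_h, child_c: (num_children, hidden); empty tensors at leaves
              h_sum = child_h.sum(dim=0)
              i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
              f = torch.sigmoid(self.f_x(x) + self.f_h(child_h))  # one gate per child
              c = torch.sigmoid(i) * torch.tanh(u) + (f * child_c).sum(dim=0)
              return torch.sigmoid(o) * torch.tanh(c), c

      cell = ChildSumTreeLSTMCell(16, 32)
      leaf = cell(torch.randn(16), torch.zeros(0, 32), torch.zeros(0, 32))
      root = cell(torch.randn(16), leaf[0].unsqueeze(0), leaf[1].unsqueeze(0))
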
• VideoLSTM

  • Referenced in 1 article [sw30438]
  • video medium. Starting from the soft-Attention LSTM, VideoLSTM makes three novel contributions. First, video ... hardwire convolutions in the soft-Attention LSTM architecture. Second, motion not only informs us about ... combined architecture. It compares favorably against other LSTM architectures for action classification and especially action...
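  A rough sketch of the "convolutions hardwired into soft attention" idea in the excerpt, assuming PyTorch: attention over each frame's spatial feature map is computed by a convolution, and the attended features feed an LSTM cell. All shapes and layers are illustrative, not VideoLSTM's actual architecture:

      import torch
      import torch.nn as nn

      attn_conv = nn.Conv2d(512, 1, kernel_size=3, padding=1)  # attention map
      cell = nn.LSTMCell(512, 256)

      h, c = torch.zeros(2, 256), torch.zeros(2, 256)
      for t in range(16):                     # 16 video frames
          fmap = torch.randn(2, 512, 7, 7)    # per-frame CNN feature map
          a = attn_conv(fmap).flatten(1).softmax(dim=1)     # (2, 49) weights
          ctx = (fmap.flatten(2) * a.unsqueeze(1)).sum(-1)  # (2, 512) attended
          h, c = cell(ctx, (h, c))
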
• DeepSleepNet

  • Referenced in 2 articles [sw21053]
  • bidirectional Long Short-Term Memory (bidirectional-LSTM) to learn transition rules among sleep stages from...
• RETURNN

  • Referenced in 2 articles [sw26580]
  • including our own fast CUDA kernel; Multidimensional LSTM (GPU only, there is no CPU version...
• AntisymmetricRNN

  • Referenced in 2 articles [sw27774]
  • much more predictable dynamics. It outperforms regular LSTM models on tasks requiring long-term memory...
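  The predictable dynamics come from forcing the recurrent matrix to be antisymmetric. A sketch of the update rule from the AntisymmetricRNN paper, h <- h + eps*tanh((W - W^T - gamma*I)h + Vx + b), in PyTorch with illustrative hyperparameters:

      import torch
      import torch.nn as nn

      class AntisymmetricRNNCell(nn.Module):
          def __init__(self, input_size, hidden_size, eps=0.01, gamma=0.01):
              super().__init__()
              self.W = nn.Parameter(0.1 * torch.randn(hidden_size, hidden_size))
              self.V = nn.Linear(input_size, hidden_size)   # carries the bias b
              self.eps, self.gamma = eps, gamma
              self.register_buffer("I", torch.eye(hidden_size))

          def forward(self, x, h):
              A = self.W - self.W.t() - self.gamma * self.I  # antisymmetric part
              return h + self.eps * torch.tanh(h @ A.t() + self.V(x))

      cell = AntisymmetricRNNCell(10, 32)
      h = torch.zeros(4, 32)
      for t in range(50):                    # stays well-behaved over long unrolls
          h = cell(torch.randn(4, 10), h)
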
• Synaptic

  • Referenced in 1 article [sw27138]
  • perceptrons, multilayer long short-term memory networks (LSTM), liquid state machines or Hopfield networks ... Derek D. Monner’s paper: A generalized LSTM-like training algorithm for second-order recurrent...
• char-rnn

  • Referenced in 1 article [sw27212]
  • implements multi-layer Recurrent Neural Network (RNN, LSTM, and GRU) for training/sampling from character-level ... additionally: allows for multiple layers, uses an LSTM instead of a vanilla RNN, has more...
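  char-rnn itself is written in Lua/Torch; the character-level language-model setup it implements looks roughly like this in PyTorch (vocabulary and sizes are illustrative):

      import torch
      import torch.nn as nn

      class CharLSTM(nn.Module):
          def __init__(self, vocab=128, embed=64, hidden=256, layers=2):
              super().__init__()
              self.emb = nn.Embedding(vocab, embed)
              self.lstm = nn.LSTM(embed, hidden, num_layers=layers, batch_first=True)
              self.head = nn.Linear(hidden, vocab)   # logits over next character

          def forward(self, ids, state=None):
              out, state = self.lstm(self.emb(ids), state)
              return self.head(out), state

      model = CharLSTM()
      ids = torch.randint(0, 128, (4, 100))          # 4 windows of 100 char ids
      logits, _ = model(ids)
      loss = nn.functional.cross_entropy(            # predict character t+1 from t
          logits[:, :-1].reshape(-1, 128), ids[:, 1:].reshape(-1))
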
• rna-state-inf

  • Referenced in 1 article [sw33162]
  • nonlocal dependencies. Bidirectional long short-term memory (LSTM) neural networks have emerged as a powerful ... inference. State predictions from a deep bidirectional LSTM are used to generate synthetic SHAPE data...
• TPA-LSTM

  • Referenced in 1 article [sw34694]
  • Temporal pattern attention for multivariate time series forecasting...
• TFLearn

  • Referenced in 1 article [sw21054]
  • recent deep learning models, such as Convolutions, LSTM, BiRNN, BatchNorm, PReLU, Residual networks, Generative networks...
• DeepRT

  • Referenced in 1 article [sw22012]
  • Network (ResNet) and Long Short-Term Memory (LSTM). In contrast to the traditional predictor based...
• SLING

  • Referenced in 1 article [sw22022]
  • transition-based, neural-network parsing with bidirectional LSTM input encoding and a Transition Based Recurrent...
• NCRF++

  • Referenced in 1 article [sw26488]
  • neural sequence labeling models such as LSTM-CRF, facilitating reproduction and refinement of those methods...
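  A condensed sketch of the LSTM-CRF decoding NCRF++ builds on, assuming PyTorch: a bidirectional LSTM yields per-token emission scores, and a tag-transition matrix (learned in a real CRF) is decoded with Viterbi. CRF training additionally needs the forward-algorithm likelihood, omitted here:

      import torch
      import torch.nn as nn

      NUM_TAGS = 5
      bilstm = nn.LSTM(100, 32, bidirectional=True, batch_first=True)
      emit = nn.Linear(64, NUM_TAGS)                 # per-token tag scores
      trans = torch.randn(NUM_TAGS, NUM_TAGS)        # trans[i, j]: score of tag i -> j

      def viterbi(emissions, trans):
          # emissions: (seq_len, num_tags) for one sentence
          score, back = emissions[0], []
          for e in emissions[1:]:
              total = score.unsqueeze(1) + trans + e.unsqueeze(0)
              score, idx = total.max(dim=0)          # best previous tag per next tag
              back.append(idx)
          best = [int(score.argmax())]
          for idx in reversed(back):
              best.append(int(idx[best[-1]]))
          return best[::-1]                          # highest-scoring tag path

      feats, _ = bilstm(torch.randn(1, 9, 100))      # one 9-token sentence
      print(viterbi(emit(feats)[0], trans))
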
• ConvS2S

  • Referenced in 1 article [sw26536]
  • outperform the accuracy of the deep LSTM setup of Wu et al. (2016) on both...