LSTM

The human brain is a recurrent neural network (RNN): a network of neurons with feedback connections. It can learn many behaviors, sequence-processing tasks, algorithms, and programs that are not learnable by traditional machine learning methods. This explains the rapidly growing interest in artificial RNNs for technical applications: general computers that can learn algorithms to map input sequences to output sequences, with or without a teacher. They are computationally more powerful and biologically more plausible than other adaptive approaches such as Hidden Markov Models (no continuous internal states), feedforward networks, and Support Vector Machines (no internal states at all). Our recent applications include adaptive robotics and control, handwriting recognition, speech recognition, keyword spotting, music composition, attentive vision, protein analysis, stock market prediction, and many other sequence problems.
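
To make the role of the internal state concrete, here is a minimal NumPy sketch of a single LSTM forward step. It is illustrative only: the function and variable names are hypothetical, the stacked weight layout is an assumption made for brevity, and the forget gate shown belongs to the later LSTM variant of Gers et al. rather than the original 1997 cell.

    # Minimal sketch of one LSTM step (forward pass only).
    # All names are hypothetical; the gate layout is an assumption.
    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def lstm_step(x, h_prev, c_prev, W, b):
        """One forward step of an LSTM cell with a forget gate.

        x      -- input at the current time step, shape (d_in,)
        h_prev -- previous hidden state, shape (d_h,)
        c_prev -- previous cell state (persistent memory), shape (d_h,)
        W      -- stacked gate weights, shape (4 * d_h, d_in + d_h)
        b      -- stacked gate biases, shape (4 * d_h,)
        """
        d_h = h_prev.shape[0]
        z = W @ np.concatenate([x, h_prev]) + b
        i = sigmoid(z[0*d_h:1*d_h])   # input gate: what to write
        f = sigmoid(z[1*d_h:2*d_h])   # forget gate: what to keep
        o = sigmoid(z[2*d_h:3*d_h])   # output gate: what to expose
        g = np.tanh(z[3*d_h:4*d_h])   # candidate cell update
        c = f * c_prev + i * g        # gated memory update
        h = o * np.tanh(c)            # gated output
        return h, c

    # Usage: run a short random sequence through the cell.
    rng = np.random.default_rng(0)
    d_in, d_h = 3, 5
    W = rng.standard_normal((4 * d_h, d_in + d_h)) * 0.1
    b = np.zeros(4 * d_h)
    h, c = np.zeros(d_h), np.zeros(d_h)
    for t in range(10):
        h, c = lstm_step(rng.standard_normal(d_in), h, c, W, b)

The cell state c is what distinguishes this architecture from the stateless models mentioned above: it carries information across arbitrarily many time steps, gated rather than overwritten at each step.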


References in zbMATH (referenced in 28 articles, 1 standard article)

Showing results 1 to 20 of 28, sorted by year (citations).


  1. Kruijne, Wouter; Bohte, Sander M.; Roelfsema, Pieter R.; Olivers, Christian N. L.: Flexible working memory through selective gating and attentional tagging (2021)
  2. Perla, Francesca; Richman, Ronald; Scognamiglio, Salvatore; Wüthrich, Mario V.: Time-series forecasting of mortality rates using deep learning (2021)
  3. Zhang, Pin; Yin, Zhen-Yu: A novel deep learning-based modelling strategy from image of particles to mechanical properties for granular materials with CNN and BiLSTM (2021)
  4. Heider, Yousef; Wang, Kun; Sun, WaiChing: SO(3)-invariance of informed-graph-based deep neural network for anisotropic elastoplastic materials (2020)
  5. Willmott, Devin; Murrugarra, David; Ye, Qiang: Improving RNA secondary structure prediction via state inference with deep recurrent neural networks (2020)
  6. Xu, Jiayang; Duraisamy, Karthik: Multi-level convolutional autoencoder networks for parametric prediction of spatio-temporal dynamics (2020)
  7. Fernández-González, Daniel; Gómez-Rodríguez, Carlos: Faster shift-reduce constituent parsing with a non-binary, bottom-up strategy (2019)
  8. Yu, Yong; Si, Xiaosheng; Hu, Changhua; Zhang, Jianxun: A review of recurrent neural networks: LSTM cells and network architectures (2019)
  9. Aggarwal, Charu C.: Neural networks and deep learning. A textbook (2018)
  10. Cinar, Goktug T.; Sequeira, Pedro M. N.; Principe, Jose C.: Hierarchical linear dynamical systems for unsupervised musical note recognition (2018)
  11. Fischer, Thomas; Krauss, Christopher: Deep learning with long short-term memory networks for financial market predictions (2018)
  12. Zhu, Henghui; Paschalidis, Ioannis Ch.; Hasselmo, Michael E.: Neural circuits for learning context-dependent associations of stimuli (2018)
  13. Er, Meng Joo; Zhang, Yong; Wang, Ning; Pratama, Mahardhika: Attention pooling-based convolutional neural network for sentence modelling (2016)
  14. Schmidhuber, Jürgen: Deep learning in neural networks: an overview (2015)
  15. Graves, Alex: Supervised sequence labelling with recurrent neural networks (2012)
  16. Namikawa, Jun; Tani, Jun: Learning to imitate stochastic time series in a compositional way by chaos (2010)
  17. Liwicki, Marcus; Bunke, Horst: Combining diverse on-line and off-line systems for handwritten text line recognition (2009)
  18. Namikawa, Jun; Tani, Jun: A model for learning to segment temporal sequences, utilizing a mixture of RNN experts together with adaptive variance (2008)
  19. Kara, Sadik; Okandan, Mustafa: Atrial fibrillation classification with artificial neural networks (2007)
  20. Skowronski, Mark D.; Harris, John G.: Automatic speech recognition using a predictive echo state network classifier (2007)
