Keras

Keras: Deep Learning library for Theano and TensorFlow.

Keras is a minimalist, highly modular neural networks library, written in Python and capable of running on top of either TensorFlow or Theano. It was developed with a focus on enabling fast experimentation: being able to go from idea to result with the least possible delay is key to doing good research.

Use Keras if you need a deep learning library that:
- allows for easy and fast prototyping (through total modularity, minimalism, and extensibility);
- supports both convolutional networks and recurrent networks, as well as combinations of the two;
- supports arbitrary connectivity schemes (including multi-input and multi-output training);
- runs seamlessly on CPU and GPU.

Read the documentation at keras.io. Keras is compatible with Python 2.7-3.5.
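The prototyping style described above can be sketched with Keras's Sequential API. This is a minimal illustrative example, not taken from the source; the layer sizes and optimizer choice are assumptions.

```python
# Minimal sketch of a Keras Sequential model (layer sizes are illustrative).
from keras.models import Sequential
from keras.layers import Dense, Activation

# Stack layers one after another; the first layer declares the input size.
model = Sequential()
model.add(Dense(32, input_dim=784))   # fully connected: 784 inputs -> 32 units
model.add(Activation('relu'))
model.add(Dense(10))                  # 10-way output layer
model.add(Activation('softmax'))

# Configure the learning process before training with model.fit(...).
model.compile(optimizer='rmsprop',
              loss='categorical_crossentropy',
              metrics=['accuracy'])
```

The same model runs unchanged on CPU or GPU, with the backend (Theano or TensorFlow) handling the device placement.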


References in zbMATH (referenced in 89 articles)

Showing results 81 to 89 of 89.
Sorted by year (citations)


  1. Cagli, Eleonora; Dumas, Cécile; Prouff, Emmanuel: Convolutional neural networks with data augmentation against jitter-based countermeasures. Profiling attacks without pre-processing (2017)
  2. Dery, Lucio Mwinmaarong; Nachman, Benjamin; Rubbo, Francesco; Schwartzman, Ariel: Weakly supervised classification in high energy physics (2017)
  3. Dong, Hao; Supratak, Akara; Mai, Luo; Liu, Fangde; Oehmichen, Axel; Yu, Simiao; Guo, Yike: TensorLayer: A Versatile Library for Efficient Deep Learning Development (2017) arXiv
  4. Rauber, Jonas; Brendel, Wieland; Bethge, Matthias: Foolbox v0.8.0: A Python toolbox to benchmark the robustness of machine learning models (2017) arXiv
  5. Komiske, Patrick T.; Metodiev, Eric M.; Schwartz, Matthew D.: Deep learning in color: towards automated quark/gluon jet discrimination (2017)
  6. Curtin, Ryan R.; Bhardwaj, Shikhar; Edel, Marcus; Mentekidis, Yannis: A generic and fast C++ optimization framework (2017) arXiv
  7. Swaddle, Michael; Noakes, Lyle; Smallbone, Harry; Salter, Liam; Wang, Jingbo: Generating three-qubit quantum circuits with neural networks (2017)
  8. Tran, Dustin; Kucukelbir, Alp; Dieng, Adji B.; Rudolph, Maja; Liang, Dawen; Blei, David M.: Edward: A library for probabilistic modeling, inference, and criticism (2016) arXiv
  9. Doetsch, Patrick; Zeyer, Albert; Voigtlaender, Paul; Kulikov, Ilya; Schlüter, Ralf; Ney, Hermann: RETURNN: The RWTH Extensible Training framework for Universal Recurrent Neural Networks (2016) arXiv