RETURNN - the RWTH extensible training framework for universal recurrent neural networks - is a Theano/TensorFlow-based implementation of modern recurrent neural network architectures. It is optimized for fast and reliable training of recurrent neural networks in a multi-GPU environment. Features include:

- Mini-batch training of feed-forward neural networks
- Sequence-chunking based batch training for recurrent neural networks
- Long short-term memory recurrent neural networks, including our own fast CUDA kernel
- Multidimensional LSTM (GPU only; there is no CPU version)
- Memory management for large data sets
- Work distribution across multiple devices
- A flexible and fast architecture which allows all kinds of encoder-attention-decoder models
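To illustrate the idea behind sequence-chunking based batch training: long sequences are cut into fixed-size, possibly overlapping chunks, so that chunks of equal length can be stacked into dense mini-batches and a single very long sequence no longer dictates the batch's memory footprint. The sketch below is illustrative only; the function name and parameters are hypothetical and do not reflect RETURNN's actual implementation or config options.

```python
def chunk_sequence(seq, chunk_size, chunk_step):
    """Split one sequence into fixed-size chunks (illustrative sketch).

    chunk_size: length of each chunk.
    chunk_step: stride between chunk starts (step < size gives overlap).
    A shorter trailing remainder is kept so no frames are dropped.
    """
    chunks = []
    start = 0
    while start + chunk_size <= len(seq):
        chunks.append(seq[start:start + chunk_size])
        start += chunk_step
    if start < len(seq):
        # Remainder shorter than chunk_size; kept as a final partial chunk.
        chunks.append(seq[start:])
    return chunks


# Example: a 10-frame sequence, chunk size 4, step 2 (50% overlap).
print(chunk_sequence(list(range(10)), chunk_size=4, chunk_step=2))
# → [[0, 1, 2, 3], [2, 3, 4, 5], [4, 5, 6, 7], [6, 7, 8, 9], [8, 9]]
```

In a trainer, the equal-length chunks would then be batched together, while the shorter remainder chunks are padded or grouped separately.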