Zoneout: Regularizing RNNs by Randomly Preserving Hidden Activations

We propose zoneout, a novel method for regularizing RNNs. At each timestep, zoneout stochastically forces some hidden units to maintain their previous values. Like dropout, zoneout uses random noise to train a pseudo-ensemble, improving generalization. But by preserving instead of dropping hidden units, gradient information and state information are more readily propagated through time, as in feedforward stochastic depth networks. We perform an empirical investigation of various RNN regularizers, and find that zoneout gives significant performance improvements across tasks. We achieve competitive results with relatively simple models in character- and word-level language modelling on the Penn Treebank and Text8 datasets, and combining with recurrent batch normalization yields state-of-the-art results on permuted sequential MNIST.
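To make the idea in the abstract concrete, here is a minimal sketch of zoneout applied to the hidden state of a vanilla RNN step. This is an illustration of the technique as described above, not the authors' reference implementation; the PyTorch cell, the rate `z_prob`, and the function name are assumptions chosen for the example.

```python
import torch

def zoneout(h_prev, h_new, z_prob, training=True):
    """Zoneout a hidden state: each unit keeps its previous value
    with probability z_prob, and takes its new value otherwise."""
    if training:
        # mask[i] = 1 means unit i is "zoned out" (preserves h_prev[i])
        mask = torch.bernoulli(torch.full_like(h_new, z_prob))
        return mask * h_prev + (1.0 - mask) * h_new
    # At test time, use the expectation of the stochastic update,
    # analogous to the inference-time rescaling used with dropout.
    return z_prob * h_prev + (1.0 - z_prob) * h_new

# Usage inside an unrolled RNN loop (cell and sizes are illustrative):
cell = torch.nn.RNNCell(input_size=10, hidden_size=20)
h = torch.zeros(1, 20)
for x_t in torch.randn(5, 1, 10):  # 5 timesteps, batch of 1
    h = zoneout(h, cell(x_t, h), z_prob=0.15, training=True)
```

Because a zoned-out unit copies its previous value rather than being set to zero, the identity path through time carries both state and gradients, which is the property the abstract contrasts with dropout.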
References in zbMATH (referenced in 5 articles, 1 standard article)
- Liang, Senwei; Khoo, Yuehaw; Yang, Haizhao: Drop-activation: implicit parameter reduction and harmonious regularization (2021)
- Gulcehre, Caglar; Chandar, Sarath; Cho, Kyunghyun; Bengio, Yoshua: Dynamic neural Turing machine with continuous and discrete addressing schemes (2018)
- Campos, Victor; Jou, Brendan; Giro-i-Nieto, Xavier; Torres, Jordi; Chang, Shih-Fu: Skip RNN: learning to skip state updates in recurrent neural networks (2017) arXiv
- Hendrycks, Dan; Gimpel, Kevin: Gaussian error linear units (GELUs) (2016) arXiv
- Krueger, David; Maharaj, Tegan; Kramár, János; Pezeshki, Mohammad; Ballas, Nicolas; Ke, Nan Rosemary; Goyal, Anirudh; Bengio, Yoshua; Courville, Aaron; Pal, Chris: Zoneout: regularizing RNNs by randomly preserving hidden activations (2016) arXiv