Gaussian Error Linear Units (GELUs). We propose the Gaussian Error Linear Unit (GELU), a high-performing neural network activation function. The GELU activation function is xΦ(x), where Φ(x) is the standard Gaussian cumulative distribution function. The GELU nonlinearity weights inputs by their value, rather than gating inputs by their sign as in ReLUs (x·1_{x>0}). We perform an empirical evaluation of the GELU nonlinearity against the ReLU and ELU activations and find performance improvements across all considered computer vision, natural language processing, and speech tasks.
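For illustration, a minimal Python sketch (names are ours, not from the listing) of the exact GELU xΦ(x) alongside the tanh-based approximation given in the paper and the ReLU it is compared against:

```python
import math

def gelu(x: float) -> float:
    # Exact GELU: x * Phi(x), with Phi the standard normal CDF,
    # Phi(x) = 0.5 * (1 + erf(x / sqrt(2))).
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def gelu_tanh(x: float) -> float:
    # Tanh approximation from the paper:
    # 0.5 * x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi) * (x + 0.044715 * x ** 3)))

def relu(x: float) -> float:
    # ReLU gates by sign: x * 1[x > 0].
    return x if x > 0.0 else 0.0

if __name__ == "__main__":
    for v in (-2.0, -0.5, 0.0, 0.5, 2.0):
        print(f"x={v:+.1f}  gelu={gelu(v):+.4f}  "
              f"approx={gelu_tanh(v):+.4f}  relu={relu(v):+.4f}")
```

Unlike ReLU's hard gate, GELU scales each input by the probability that a standard normal variable falls below it, so negative inputs are attenuated smoothly rather than zeroed outright.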
References in zbMATH (referenced in 5 articles, 1 standard article)
- Kratsios, Anastasis; Hyndman, Cody: NEU: a meta-algorithm for universal UAP-invariant feature representation (2021)
- Wu, Mike; Parbhoo, Sonali; Hughes, Michael C.; Roth, Volker; Doshi-Velez, Finale: Optimizing for interpretability in deep neural networks with tree regularization (2021)
- E, Weinan; Ma, Chao; Wu, Lei: Machine learning from a continuous viewpoint. I (2020)
- Chen, Yixiao; Zhang, Linfeng; Wang, Han; E, Weinan: DeePKS-kit: a package for developing machine learning-based chemically accurate energy and density functional models (2020) arXiv
- Hendrycks, Dan; Gimpel, Kevin: Gaussian Error Linear Units (GELUs) (2016) arXiv