Tensor2Tensor, or T2T for short, is a library of deep learning models and datasets designed to make deep learning more accessible and accelerate ML research. T2T is actively used and maintained by researchers and engineers within the Google Brain team and a community of users. We’re eager to collaborate with you too, so feel free to open an issue on GitHub or send along a pull request (see our contribution doc). You can chat with us on Gitter and join the T2T Google Group.
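As a concrete illustration of the workflow the description alludes to, the project's documented entry point is the `t2t-trainer` command, which bundles dataset generation, model selection, and training behind a few flags. The sketch below follows the walkthrough in the T2T README (problem, model, and hyperparameter-set names are version-dependent and may differ in current releases):

```shell
# Install the library (pulls in TensorFlow as a dependency).
pip install tensor2tensor

# Generate the WMT English-German data, then train a Transformer on it.
# --problem selects the dataset, --model the architecture, and
# --hparams_set a named hyperparameter configuration.
t2t-trainer \
  --generate_data \
  --problem=translate_ende_wmt32k \
  --model=transformer \
  --hparams_set=transformer_base_single_gpu \
  --data_dir=~/t2t_data \
  --output_dir=~/t2t_train/ende
```

Registered problems, models, and hparams sets can be listed by running `t2t-trainer` with no arguments.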

References in zbMATH (referenced in 91 articles)

Showing results 21 to 40 of 91, sorted by year (citations).
  1. Wang, Dandan; Xu, Jinlan; Gao, Fei; Wang, Charlie C. L.; Gu, Renshu; Lin, Fei; Rabczuk, Timon; Xu, Gang: IGA-reuse-NET: a deep-learning-based isogeometric analysis-reuse approach with topology-consistent parameterization (2022)
  2. Waschkowski, Fabian; Zhao, Yaomin; Sandberg, Richard; Klewicki, Joseph: Multi-objective CFD-driven development of coupled turbulence closure models (2022)
  3. Wu, Shaoju; Zhao, Wei; Ji, Songbai: Real-time dynamic simulation for highly accurate spatiotemporal brain deformation from impact (2022)
  4. Abbasimehr, Hossein; Paki, Reza: Prediction of COVID-19 confirmed cases combining deep learning methods and Bayesian optimization (2021)
  5. Adewoyin, Rilwan A.; Dueben, Peter; Watson, Peter; He, Yulan; Dutta, Ritabrata: TRU-NET: a deep learning approach to high resolution prediction of rainfall (2021)
  6. Bakhtin, Anton; Deng, Yuntian; Gross, Sam; Ott, Myle; Ranzato, Marc'Aurelio; Szlam, Arthur: Residual energy-based models for text (2021)
  7. Bengio, Yoshua; Lodi, Andrea; Prouvost, Antoine: Machine learning for combinatorial optimization: a methodological tour d’horizon (2021)
  8. Chen, Chuangtao; He, Zhimin; Huang, Zhiming; Situ, Haozhen: Reconstructing a quantum state with a variational autoencoder (2021)
  9. Chen, Jiaoyan; Hu, Pan; Jimenez-Ruiz, Ernesto; Holter, Ole Magnus; Antonyrajah, Denvar; Horrocks, Ian: OWL2Vec*: embedding of OWL ontologies (2021)
  10. Chung, Eric; Leung, Wing Tat; Pun, Sai-Mang; Zhang, Zecheng: A multi-stage deep learning based algorithm for multiscale model reduction (2021)
  11. Ding, Man; Han, Congying; Guo, Tiande: High generalization performance structured self-attention model for knapsack problem (2021)
  12. Evans, Richard; Bošnjak, Matko; Buesing, Lars; Ellis, Kevin; Pfau, David; Kohli, Pushmeet; Sergot, Marek: Making sense of raw input (2021)
  13. Fan, Angela; Bhosale, Shruti; Schwenk, Holger; Ma, Zhiyi; El-Kishky, Ahmed; Goyal, Siddharth; Baines, Mandeep; Celebi, Onur; Wenzek, Guillaume; Chaudhary, Vishrav; Goyal, Naman; Birch, Tom; Liptchinsky, Vitaliy; Edunov, Sergey; Auli, Michael; Joulin, Armand: Beyond English-centric multilingual machine translation (2021)
  14. Feinauer, Christoph; Lucibello, Carlo: Reconstruction of pairwise interactions using energy-based models (2021)
  15. Flori, Andrea; Regoli, Daniele: Revealing pairs-trading opportunities with long short-term memory networks (2021)
  16. Gama, Ricardo; Fernandes, Hugo L.: A reinforcement learning approach to the orienteering problem with time windows (2021)
  17. Girin, Laurent; Leglaive, Simon; Bie, Xiaoyu; Diard, Julien; Hueber, Thomas; Alameda-Pineda, Xavier: Dynamical variational autoencoders: a comprehensive review (2021)
  18. Grabovoy, A. V.; Strijov, V. V.: Bayesian distillation of deep learning models (2021)
  19. Hao, Jie; Zhu, William: Architecture self-attention mechanism: nonlinear optimization for neural architecture search (2021)
  20. Ivek, Tomislav; Vlah, Domagoj: BlackBox: generalizable reconstruction of extremal values from incomplete spatio-temporal data (2021)