The DARPA TIMIT Acoustic-Phonetic Continuous Speech Corpus.

The Texas Instruments/Massachusetts Institute of Technology (TIMIT) corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems. TIMIT contains speech from 630 speakers representing 8 major dialect divisions of American English, each speaking 10 phonetically rich sentences. The TIMIT corpus includes time-aligned orthographic, phonetic, and word transcriptions, as well as speech waveform data for each spoken sentence.
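The time-aligned phonetic transcriptions are distributed as plain-text .PHN files, one segment per line in the form "start_sample end_sample phone", with sample indices at the corpus's 16 kHz sampling rate. A minimal parsing sketch (the example segment values below are hypothetical, chosen only to illustrate the format):

```python
def parse_phn(text):
    """Parse TIMIT .PHN-format text into (start, end, phone) tuples.

    Each line reads "<start_sample> <end_sample> <phone>"; sample
    indices refer to the 16 kHz waveform.
    """
    segments = []
    for line in text.strip().splitlines():
        start, end, phone = line.split()
        segments.append((int(start), int(end), phone))
    return segments

# Hypothetical .PHN fragment for illustration (not from the corpus):
example = """0 3050 h#
3050 4559 sh
4559 5723 iy"""

for start, end, phone in parse_phn(example):
    # Convert sample spans to durations in seconds at 16 kHz.
    print(phone, (end - start) / 16000)
```

The .WRD word-level transcriptions use the same three-column layout, so the same parser applies to both.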

References in zbMATH (referenced in 30 articles)

Showing results 1 to 20 of 30.
Sorted by year (citations)


  1. Akuzawa, Kei; Iwasawa, Yusuke; Matsuo, Yutaka: Information-theoretic regularization for learning global features by sequential VAE (2021)
  2. Ravanelli, Mirco; Parcollet, Titouan; et al.: SpeechBrain: a general-purpose speech toolkit (2021) arXiv
  3. Fang, Yuzhuo; Xu, Zhiyong: Multiple sound source localization and counting using one pair of microphones in noisy and reverberant environments (2020)
  4. Fayek, Haytham M.; Cavedon, Lawrence; Wu, Hong Ren: Progressive learning: a deep learning framework for continual learning (2020)
  5. Geete, Kanu; Pandey, Manish: A noise-based stabilizer for convolutional neural networks (2019)
  6. Jing, Li; Gulcehre, Caglar; Peurifoy, John; Shen, Yichen; Tegmark, Max; Soljacic, Marin; Bengio, Yoshua: Gated orthogonal recurrent units: on learning to forget (2019)
  7. May, Avner; Garakani, Alireza Bagheri; Lu, Zhiyun; Guo, Dong; Liu, Kuan; Bellet, Aurélien; Fan, Linxi; Collins, Michael; Hsu, Daniel; Kingsbury, Brian; Picheny, Michael; Sha, Fei: Kernel approximation methods for speech recognition (2019)
  8. Młynarski, Wiktor; McDermott, Josh H.: Learning midlevel auditory codes from natural sound statistics (2018)
  9. Venkatesan, R.; Balaji Ganesh, A.: Binaural classification-based speech segregation and robust speaker recognition system (2018)
  10. Exarchakis, Georgios; Lücke, Jörg: Discrete sparse coding (2017)
  11. Prasad, N.; Kishore Kumar, T.: Speech bandwidth extension aided by magnitude spectrum data hiding (2017)
  12. Atkins, Jamin; Sharma, Davinder Pal: Visualization of babble-speech interactions using Andrews curves (2016) ioport
  13. Hopfield, John J.: Understanding emergent dynamics: using a collective activity coordinate of a neural network to recognize time-varying patterns (2015)
  14. Trawicki, M. B.; Johnson, M. T.: Beta-order minimum mean-square error multichannel spectral amplitude estimation for speech enhancement (2015)
  15. Koutník, Jan; Greff, Klaus; Gomez, Faustino; Schmidhuber, Jürgen: A clockwork RNN (2014) arXiv
  16. Ma, Zhanyu; Rana, Pravin Kumar; Taghia, Jalil; Flierl, Markus; Leijon, Arne: Bayesian estimation of Dirichlet mixture model with variational inference (2014)
  17. Hermans, Michiel; Schrauwen, Benjamin: Recurrent kernel machines: computing with infinite echo state networks (2012)
  18. Vuppala, Anil Kumar; Rao, K. Sreenivasa; Chakrabarti, Saswat: Spotting and recognition of consonant-vowel units from continuous speech using accurate detection of vowel onset points (2012) ioport
  19. Bahoura, Mohammed; Ezzaidi, Hassan: FPGA-implementation of parallel and sequential architectures for adaptive noise cancelation (2011) ioport
  20. Kühne, Marco; Togneri, Roberto; Nordholm, Sven: A novel fuzzy clustering algorithm using observation weighting and context information for reverberant blind speech separation (2010)
