The Cohn-Kanade AU-Coded Facial Expression Database supports research in automatic facial image analysis and synthesis and in perceptual studies. Cohn-Kanade is available in two versions, and a third is in preparation.

Version 1, the original release (Kanade, Cohn, & Tian, 2000), includes 486 sequences from 97 posers. Each sequence begins with a neutral expression and proceeds to a peak expression. The peak expression of each sequence is fully FACS coded (Ekman, Friesen, & Hager, 2002; Ekman & Friesen, 1979) and given an emotion label. The emotion label refers to the expression that was requested rather than what may actually have been performed; for validated emotion labels, use version 2 (CK+), described below.

Version 2, referred to as CK+, includes both posed and non-posed (spontaneous) expressions as well as additional types of metadata. For posed expressions, the number of sequences is increased from the initial release by 22% and the number of subjects by 27%. As with the initial release, the target expression of each sequence is fully FACS coded. In addition, validated emotion labels have been added to the metadata, so sequences may be analyzed for both action units and prototypic emotions. The non-posed expressions are from Ambadar, Cohn, & Reed (2009). CK+ also provides protocols and baseline results for facial feature tracking and for action unit and emotion recognition. Tracking results for shape and appearance use the approach of Matthews & Baker (2004). For action unit and expression recognition, a linear support vector machine (SVM) classifier with leave-one-out subject cross-validation was used. Both sets of results are included with the metadata. For a full description of CK+, see P. Lucey et al. (2010).

Version 3 is planned for spring 2013. The original Cohn-Kanade data collection included synchronized frontal and 30-degree-from-frontal video (fig. 1, below); Version 3 will add the synchronized 30-degree-from-frontal video.

To receive the database for research, non-commercial use, download, sign, and return an Agreement to the Affect Analysis Group. All student or non-faculty agreement forms must be co-signed by a faculty advisor.
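The leave-one-out subject cross-validation used for the CK+ baselines can be sketched as follows: each fold holds out every sequence belonging to one subject, so the classifier is never tested on a poser it saw during training. This is a minimal illustration only; the subject IDs and fold structure below are invented placeholders, not CK+ metadata, and the actual baseline pairs this splitting scheme with a linear SVM.

```python
def leave_one_subject_out(subject_ids):
    """Yield (train_indices, test_indices) pairs, one fold per subject.

    subject_ids: per-sequence subject labels, e.g. ["S01", "S01", "S02", ...].
    In each fold, all sequences from one subject form the test set and
    every other sequence forms the training set.
    """
    for held_out in sorted(set(subject_ids)):
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        yield train, test

# Illustrative example: 4 posers with 2 sequences each -> 4 folds.
subjects = ["S01", "S01", "S02", "S02", "S03", "S03", "S04", "S04"]
folds = list(leave_one_subject_out(subjects))
print(len(folds))  # one fold per subject: 4
```

Per-fold accuracies from the classifier are then averaged across subjects to produce the reported baseline numbers.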

References in zbMATH (referenced in 61 articles)

Showing results 1 to 20 of 61, sorted by year (citations).


  1. Abiram, R. Nandhini; Vincent, P. M. Durai Raj: Identity preserving multi-pose facial expression recognition using fine tuned VGG on the latent space vector of generative adversarial network (2021)
  2. Tuncer, Turker; Dogan, Sengul; Subasi, Abdulhamit: A new fractal pattern feature generation function based emotion recognition method using EEG (2021)
  3. Daghyani, Masoud; Zamzami, Nuha; Bouguila, Nizar: Toward an efficient computation of log-likelihood functions in statistical inference: overdispersed count data clustering (2020)
  4. Daoudi, Mohamed; Alvarez Paiva, Juan-Carlos; Kacem, Anis: The Riemannian and affine geometry of facial expression and action recognition (2020)
  5. Jang, Jinhyeok; Cho, Hyunjoong; Kim, Jaehong; Lee, Jaeyeon; Yang, Seungjoon: Deep neural networks with a set of node-wise varying activation functions (2020)
  6. Liu, Yipeng; Ji, Zhongping; Zhang, Yu-Wei; Xu, Gang: Example-driven modeling of portrait bas-relief (2020)
  7. Najar, Fatma; Bourouis, Sami; Al-Azawi, Rula; Al-Badi, Ali: Online recognition via a finite mixture of multivariate generalized Gaussian distributions (2020)
  8. Qin, Shu; Zhu, Zhengzhou; Zou, Yuhang; Wang, Xiaowei: Facial expression recognition based on Gabor wavelet transform and 2-channel CNN (2020)
  9. Jain, Vanita; Lamba, Puneet Singh; Singh, Bhanu; Namboothiri, Narayanan; Dhall, Shafali: Facial expression recognition using feature level fusion (2019)
  10. Kucukoglu, Irem; Simsek, Buket; Simsek, Yilmaz: Multidimensional Bernstein polynomials and Bézier curves: analysis of machine learning algorithm for facial expression recognition based on curvature (2019)
  11. Lu, Yang; Wang, Shigang; Zhao, Wenting: Facial expression recognition based on discrete separable shearlet transform and feature selection (2019)
  12. Ahmed, Faisal; Kabir, Md. Hasanul: Facial expression recognition under difficult conditions: a comprehensive study on edge directional texture patterns (2018)
  13. Chu, Wen-Sheng; De la Torre, Fernando; Cohn, Jeffrey F.; Messinger, Daniel S.: A branch-and-bound framework for unsupervised common event discovery (2017)
  14. Quost, Benjamin; Denoeux, Thierry; Li, Shoumei: Parametric classification with soft labels using the evidential EM algorithm: linear discriminant analysis versus logistic regression (2017)
  15. Gaidhane, Vilas H.; Hote, Yogesh V.; Singh, Vijander: Emotion recognition using eigenvalues and Levenberg-Marquardt algorithm-based classifier (2016)
  16. Susan, Seba; Aggarwal, Nandini; Chand, Shefali; Gupta, Ayush: Image coding based on maximum entropy partitioning for identifying improbable intensities related to facial expressions (2016)
  17. Kamaruzaman, Fadhlan; Shafie, Amir Akramin; Mustafah, Yasir M.: Coincidence detection using spiking neurons with application to face recognition (2015)
  18. Özöğür-Akyüz, Süreyya; Windeatt, Terry; Smith, Raymond: Pruning of error correcting output codes by optimization of accuracy-diversity trade off (2015)
  19. Poria, Soujanya; Cambria, Erik; Hussain, Amir; Huang, Guang-Bin: Towards an intelligent framework for multimodal affective data analysis (2015)
  20. An, Gaoyun; Liu, Shuai; Jin, Yi; Ruan, Qiuqi; Lu, Shan: Facial expression recognition based on discriminant neighborhood preserving nonnegative tensor factorization and ELM (2014)
