Existing annotated databases of facial expressions in the wild are small and mostly cover discrete emotions (aka the categorical model). Very few annotated facial databases exist for affective computing in the continuous dimensional model (e.g., valence and arousal).
To meet this need, we have created AffectNet, a new database of facial expressions in the wild, by collecting and annotating facial images. AffectNet contains more than 1M facial images collected from the Internet by querying three major search engines using 1250 emotion-related keywords in six different languages. About half of the retrieved images (~440K) were manually annotated for the presence of seven discrete facial expressions (categorical model) and the intensity of valence and arousal (dimensional model). AffectNet is by far the largest database of facial expressions, valence, and arousal in the wild, enabling research in automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines can perform better than conventional machine learning methods and off-the-shelf facial expression recognition systems.
For downloading AffectNet, fill out the request form in HERE.
You can download our paper HERE.
All papers (or any publicly available text) that use all or part of the images in the database must cite the following paper:
Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor, “AffectNet: A New Database for Facial Expression, Valence, and Arousal Computation in the Wild”, IEEE Transactions on Affective Computing, 2017.
Some clarifications about AffectNet:
Currently, the test set is not released. We are planning to organize a challenge on AffectNet in the near future, and the test set will be used to evaluate participants' methods and algorithms. The following table gives the results of our experiments on the validation set using our baseline methods (described in our paper) trained on the training set. The first table reports the results of classifying the seven expressions, and the second presents the results of valence/arousal prediction. We suggest researchers use the validation results as a baseline for comparison until the test set is released.
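For researchers comparing against the valence/arousal baselines, a minimal sketch of two metrics commonly used for dimensional affect prediction, root mean square error (RMSE) and the concordance correlation coefficient (CCC), is shown below (assuming predictions and ground-truth annotations are NumPy arrays of valence or arousal values in [-1, 1]):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error between annotations and predictions."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))

def ccc(y_true, y_pred):
    """Concordance correlation coefficient: measures agreement,
    penalizing both poor correlation and shifts in mean/scale."""
    mean_t, mean_p = y_true.mean(), y_pred.mean()
    var_t, var_p = y_true.var(), y_pred.var()
    cov = np.mean((y_true - mean_t) * (y_pred - mean_p))
    return 2 * cov / (var_t + var_p + (mean_t - mean_p) ** 2)

# Example: perfect predictions give RMSE = 0 and CCC = 1
valence_true = np.array([0.1, -0.5, 0.3, 0.8])
print(rmse(valence_true, valence_true))  # 0.0
print(ccc(valence_true, valence_true))   # 1.0
```

Note that the exact metric definitions used for the challenge will be specified when the test set is released; the functions above are only an illustration.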
The total numbers of manually annotated images in the training and validation sets in each emotion category are given in the following table.