AffectNet

Existing annotated databases of facial expressions in the wild are small and mostly cover only discrete emotions (the categorical model). Very few annotated facial databases support affective computing in the continuous dimensional model (e.g., valence and arousal).

To meet this need, we created AffectNet, a new database of facial expressions in the wild, by collecting and annotating facial images. AffectNet contains more than 1M facial images collected from the Internet by querying three major search engines with 1,250 emotion-related keywords in six different languages. About half of the retrieved images (~440K) were manually annotated for the presence of seven discrete facial expressions (categorical model) and for the intensity of valence and arousal (dimensional model). AffectNet is by far the largest in-the-wild database of facial expressions, valence, and arousal, enabling research on automated facial expression recognition in two different emotion models. Two baseline deep neural networks are used to classify images in the categorical model and to predict the intensity of valence and arousal. Various evaluation metrics show that our deep neural network baselines outperform conventional machine learning methods and off-the-shelf facial expression recognition systems.
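To make the two annotation schemes concrete, the sketch below shows a minimal PyTorch model with a categorical head (expression logits) and a dimensional head (valence/arousal in [-1, 1]). This is purely illustrative: the paper trains separate baseline networks for the two tasks, and the architecture and layer sizes here are assumptions, not the authors' exact setup.

import torch
import torch.nn as nn

class AffectNetBaselineSketch(nn.Module):
    """Illustrative two-head model; NOT the paper's baseline architecture."""
    def __init__(self, num_expressions: int = 8):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.expression_head = nn.Linear(64, num_expressions)  # categorical model
        self.va_head = nn.Linear(64, 2)                        # dimensional model

    def forward(self, x):
        features = self.backbone(x)
        logits = self.expression_head(features)
        va = torch.tanh(self.va_head(features))  # bound valence/arousal to [-1, 1]
        return logits, va

model = AffectNetBaselineSketch()
images = torch.randn(4, 3, 224, 224)  # dummy batch standing in for face crops
logits, va = model(images)
print(logits.shape, va.shape)  # torch.Size([4, 8]) torch.Size([4, 2])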

To download AffectNet, only lab managers or professors can request access, by downloading, completing, and signing the LICENSE AGREEMENT FILE. Once the agreement is completed, use the following form to submit your request; make sure to attach the signed agreement file to the form.
The AffectNet request form is HERE.

STUDENTS: Please ask your academic advisor/supervisor to request access to AffectNet.

You can download our paper HERE.

All papers (or any publicly available text) that use all or part of the images in the database must cite the following paper:

Ali Mollahosseini, Behzad Hasani, and Mohammad H. Mahoor, “AffectNet: A New Database for Facial Expression, Valence, and Arousal Computation in the Wild”, IEEE Transactions on Affective Computing, 2017.

Some clarifications about AffectNet:

Currently, the test set is not released. We plan to organize a challenge on AffectNet in the near future, and the test set will be used to evaluate participants' methods and algorithms. The following tables give the results of our experiments on the validation set using our baseline methods (described in our paper) trained on the training set: the first table reports the results of classifying 8 expressions, and the second reports the valence/arousal results. We suggest researchers use the validation results as a baseline for comparison until the test set is released.
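For researchers comparing against these validation numbers, the sketch below computes a few metrics commonly used for this kind of evaluation: classification accuracy for the expressions, and RMSE plus the concordance correlation coefficient (CCC) for valence/arousal. Function names and the random stand-in data are illustrative assumptions; consult the paper for the exact set of metrics it reports.

import numpy as np

def accuracy(pred_labels: np.ndarray, true_labels: np.ndarray) -> float:
    # Fraction of images whose predicted expression matches the annotation.
    return float(np.mean(pred_labels == true_labels))

def rmse(pred: np.ndarray, true: np.ndarray) -> float:
    # Root-mean-square error between predicted and annotated valence/arousal.
    return float(np.sqrt(np.mean((pred - true) ** 2)))

def ccc(pred: np.ndarray, true: np.ndarray) -> float:
    # Concordance correlation coefficient: penalizes both low correlation
    # and systematic bias between predictions and annotations.
    mean_p, mean_t = pred.mean(), true.mean()
    var_p, var_t = pred.var(), true.var()
    cov = np.mean((pred - mean_p) * (true - mean_t))
    return float(2 * cov / (var_p + var_t + (mean_p - mean_t) ** 2))

# Example with random stand-in predictions (not real AffectNet results)
rng = np.random.default_rng(0)
true_va = rng.uniform(-1, 1, size=1000)
pred_va = true_va + rng.normal(0, 0.2, size=1000)
print(rmse(pred_va, true_va), ccc(pred_va, true_va))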

The total number of manually annotated images in the training and validation sets for each emotion category is given in the following table.