The face provides a rich canvas of emotion. Humans are innately programmed to express and communicate emotion through facial expressions. Affdex measures and reports these emotions and facial expressions using sophisticated computer vision and machine learning techniques.

When you use the Affdex SDK in your applications, you will receive facial expression output in the form of Affdex metrics: 7 emotion metrics, 20 facial expression metrics, 13 emojis, and 4 appearance metrics.
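
To make the shape of that output concrete, here is a minimal sketch of a per-frame result container. Python is used for all the sketches in this document; the type and field names below are illustrative assumptions, not the SDK's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class FrameMetrics:
    """Illustrative per-frame Affdex output; names here are assumptions, not SDK types."""
    emotions: dict[str, float] = field(default_factory=dict)     # 7 metrics, 0-100 scores
    expressions: dict[str, float] = field(default_factory=dict)  # 20 metrics, 0-100 scores
    emojis: dict[str, float] = field(default_factory=dict)       # 13 metrics, 0-100 scores
    appearance: dict[str, str] = field(default_factory=dict)     # e.g. {"gender": "Female"}
```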

Emotions

Anger

Contempt

Disgust

Fear

Joy

Sadness

Surprise

In addition, the SDK provides valence and engagement as alternative metrics for measuring the emotional experience.

Engagement: A measure of facial muscle activation that illustrates the subject’s expressiveness. The range of values is from 0 to 100.

Valence: A measure of the positive or negative nature of the recorded person’s experience. The range of values is from -100 to 100.

How do we map facial expressions to emotions?

The Emotion predictors use the observed facial expressions as input to calculate the likelihood of an emotion. Read more about emotion mapping here.
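
The actual predictors are learned models, but as a purely illustrative sketch (the weights below are invented for illustration and do not reflect the real mapping), an emotion likelihood could be assembled from expression scores like this:

```python
# Illustrative only: the weights below are invented for this sketch; the real
# Affdex predictors are learned models with their own mapping.
JOY_WEIGHTS = {"smile": 0.8, "cheek_raise": 0.3, "brow_furrow": -0.4}

def emotion_score(expressions: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of expression scores (0-100), clamped to the 0-100 metric range."""
    raw = sum(w * expressions.get(name, 0.0) for name, w in weights.items())
    return max(0.0, min(100.0, raw))

print(emotion_score({"smile": 90.0, "cheek_raise": 70.0}, JOY_WEIGHTS))  # 93.0
```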


Facial Expressions

Attention – Measure of focus based on the head orientation

Brow Furrow – Both eyebrows moved lower and closer together

Brow Raise – Both eyebrows moved upwards

Cheek Raise – Lifting of the cheeks, often accompanied by “crow’s feet” wrinkles at the eye corners

Chin Raise – The chin boss and the lower lip pushed upwards

Dimpler – The lip corners tightened and pulled inwards

Eye Closure – Both eyelids closed

Eye Widen – The upper lid raised sufficiently to expose the entire iris

Inner Brow Raise – The inner corners of eyebrows are raised

Jaw Drop – The jaw pulled downwards

Lid Tighten – The eye aperture narrowed and the eyelids tightened

Lip Corner Depressor – Lip corners dropping downwards (frown)

Lip Press – Pressing the lips together without pushing up the chin boss

Lip Pucker – The lips pushed forward

Lip Stretch – The lips pulled back laterally

Lip Suck – The lips and the adjacent skin pulled into the mouth

Mouth Open – Lower lip dropped downwards

Nose Wrinkle – Wrinkles appear along the sides and across the root of the nose as the skin is pulled upwards

Smile – Lip corners pulling outwards and upwards towards the ears, combined with other indicators from around the face

Smirk – Left or right lip corner pulled upwards and outwards


Emoji Expressions

Laughing – Mouth opened and both eyes closed

Smiley – Smiling, mouth opened and both eyes opened

Relaxed – Smiling and both eyes opened

Wink – Either of the eyes closed

Kissing – The lips puckered and both eyes opened

Stuck Out Tongue – The tongue clearly visible

Stuck Out Tongue and Winking Eye – The tongue clearly visible and either of the eyes closed

Scream – The eyebrows raised and the mouth opened

Flushed – The eyebrows raised and both eyes widened

Smirk – Left or right lip corner pulled upwards and outwards

Disappointed – Frowning, with both lip corners pulled downwards

Rage – The brows furrowed, and the lips tightened and pressed

Neutral – Neutral face without any facial expressions
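
The emoji definitions above are essentially boolean combinations of expression states. As a minimal sketch (the boolean inputs are hypothetical; the SDK reports emoji metrics directly as 0-100 scores), a few of them could be written as rules:

```python
from dataclasses import dataclass

@dataclass
class FaceState:
    """Hypothetical per-frame expression flags used only for this sketch."""
    smiling: bool
    mouth_open: bool
    left_eye_closed: bool
    right_eye_closed: bool

def emoji_for(face: FaceState) -> str:
    """Pick an emoji label by mirroring a few of the definitions above."""
    both_closed = face.left_eye_closed and face.right_eye_closed
    both_open = not (face.left_eye_closed or face.right_eye_closed)
    if face.mouth_open and both_closed:
        return "Laughing"   # mouth opened and both eyes closed
    if face.smiling and face.mouth_open and both_open:
        return "Smiley"     # smiling, mouth opened and both eyes opened
    if face.smiling and both_open:
        return "Relaxed"    # smiling and both eyes opened
    if not both_closed and not both_open:
        return "Wink"       # either of the eyes closed
    return "Neutral"        # no rule matched

print(emoji_for(FaceState(smiling=True, mouth_open=True,
                          left_eye_closed=False, right_eye_closed=False)))  # Smiley
```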


Using the Metrics

Emotion, Expression and Emoji metric scores indicate when users show a specific emotion or expression (e.g., a smile) along with the degree of confidence. The metrics can be thought of as detectors: as the emotion or facial expression occurs and intensifies, the score rises from 0 (no expression) to 100 (expression fully present).

We also expose a composite emotional metric called valence, which gives feedback on the overall experience. Valence values from 0 to 100 indicate a neutral-to-positive experience, while values from -100 to 0 indicate a negative-to-neutral experience.
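
As a minimal sketch of consuming these scores (the 50.0 threshold and the parameter names are assumptions for illustration, not SDK recommendations):

```python
def describe_frame(smile: float, valence: float, threshold: float = 50.0) -> str:
    """Interpret a smile score (0-100) and a valence score (-100 to 100).
    The threshold is an arbitrary choice made for this sketch."""
    smiling = "smiling" if smile >= threshold else "not smiling"
    if valence > 0:
        tone = "neutral to positive"
    elif valence < 0:
        tone = "negative to neutral"
    else:
        tone = "neutral"
    return f"Subject is {smiling}; overall experience is {tone} (valence={valence})."

print(describe_frame(smile=72.0, valence=35.0))
```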


Appearance

Our SDKs also provide the following metrics about physical appearance:

Age

The age classifier attempts to estimate the age range. Supported ranges: Under 18, 18 to 24, 25 to 34, 35 to 44, 45 to 54, 55 to 64, and 65 Plus.

Ethnicity

The ethnicity classifier attempts to identify the person’s ethnicity. Supported classes: Caucasian, Black African, South Asian, East Asian and Hispanic.

At the current level of accuracy, the ethnicity and age classifiers are more useful as a quantitative measure of demographics than as a way to identify the age and ethnicity of an individual. We are always looking to diversify the data sources used in training those metrics to improve their accuracy.

Gender

The gender classifier attempts to identify the human perception of gender expression.

In the case of video or live feeds, the Gender, Age and Ethnicity classifiers track a face for a window of time to build confidence in their decision. If a classifier is unable to reach a decision, its value is reported as “Unknown”.
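
One way to picture that windowed decision (the window length, agreement ratio, and majority-vote scheme below are assumptions for illustration, not the SDK's actual logic):

```python
from collections import Counter

def decide_over_window(per_frame_labels: list[str],
                       min_frames: int = 30,
                       min_agreement: float = 0.7) -> str:
    """Report a label only once enough frames agree; otherwise "Unknown".
    The window size and agreement ratio are illustrative values."""
    if len(per_frame_labels) < min_frames:
        return "Unknown"
    label, count = Counter(per_frame_labels).most_common(1)[0]
    return label if count / len(per_frame_labels) >= min_agreement else "Unknown"

frames = ["Female"] * 40 + ["Male"] * 5
print(decide_over_window(frames))  # Female (40/45 agreement ≈ 0.89)
```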

Glasses

A confidence level indicating whether the subject in the image is wearing eyeglasses or sunglasses.


Face Tracking and Head Angle Estimation

The SDKs include our latest face tracker, which calculates the following metrics:

Facial Landmarks Estimation

The tracking of the Cartesian coordinates of the facial landmarks. See the facial landmark mapping here.

Head Orientation Estimation

Estimation of the head pose in 3-D space, expressed as Euler angles (pitch, yaw, roll).
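
For reference, the three Euler angles can be composed into a 3x3 rotation matrix. The axis conventions below (x = pitch, y = yaw, z = roll, composed as Rz · Ry · Rx) are one common choice and are not necessarily the convention the SDK uses:

```python
import math

def head_rotation_matrix(pitch: float, yaw: float, roll: float) -> list[list[float]]:
    """Compose Euler angles (in degrees) into a rotation matrix R = Rz @ Ry @ Rx.
    The axis convention is an assumption for this sketch, not the SDK's."""
    p, y, r = (math.radians(a) for a in (pitch, yaw, roll))
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    cr, sr = math.cos(r), math.sin(r)
    return [
        [cy * cr, sp * sy * cr - cp * sr, cp * sy * cr + sp * sr],
        [cy * sr, sp * sy * sr + cp * cr, cp * sy * sr - sp * cr],
        [-sy,     sp * cy,                cp * cy],
    ]

print(head_rotation_matrix(pitch=10.0, yaw=-5.0, roll=2.0))
```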

Interocular Distance

The distance between the two outer eye corners.
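
Given the facial landmark coordinates, the interocular distance is simply the Euclidean distance between the two outer eye corner landmarks. Which landmark indices correspond to the outer corners comes from the facial landmark mapping referenced above; the tuple inputs here are placeholders:

```python
import math

def interocular_distance(outer_left: tuple[float, float],
                         outer_right: tuple[float, float]) -> float:
    """Euclidean distance (in pixels) between the two outer eye corners.
    The (x, y) inputs stand in for the SDK's landmark coordinates."""
    dx = outer_right[0] - outer_left[0]
    dy = outer_right[1] - outer_left[1]
    return math.hypot(dx, dy)

print(interocular_distance((120.0, 200.0), (220.0, 205.0)))  # ≈ 100.12
```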