Using a frame stream is a common way to obtain video for facial expression detection. The FrameDetector tracks expressions in a sequence of real-time frames. It expects each frame to carry a timestamp indicating when the frame was captured, and the timestamps must arrive in increasing order. The FrameDetector detects a face in a frame and delivers information about it to you, including the facial expressions.

Creating the detector

The FrameDetector constructor expects one parameter, faceMode:

/*
   Face detector configuration - If not specified, defaults to
   affdex.FaceDetectorMode.LARGE_FACES

   affdex.FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
   affdex.FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
*/
var faceMode = affdex.FaceDetectorMode.LARGE_FACES;

//Construct a FrameDetector and specify the face detector mode.
var detector = new affdex.FrameDetector(faceMode);

Configuring the callback functions

The detectors use callbacks to communicate events and results. Each action has two callbacks: a success callback, called when the action completes successfully, and a failure callback, called when the action fails.

The functions addEventListener and removeEventListener are used to register or deregister a callback.
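
For example, a callback can be registered under a named function so that the same reference can later be deregistered (a minimal sketch; onFrameResults is a placeholder handler name, and removeEventListener is assumed here to mirror addEventListener's signature):

// Placeholder handler; replace with your own result handling.
function onFrameResults(faces, image, timestamp) {}

// Register the callback.
detector.addEventListener("onImageResultsSuccess", onFrameResults);

// ...later, deregister it when results are no longer needed.
detector.removeEventListener("onImageResultsSuccess", onFrameResults);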

onInitialize

This action occurs at the end of the detector initialization process.

detector.addEventListener("onInitializeSuccess", function() {});
detector.addEventListener("onInitializeFailure", function() {});

onImageResults

This action occurs at the end of the processing of a video frame.

/* 
  onImageResultsSuccess is called when a frame is processed successfully and receives 3 parameters:
  - faces: Dictionary of faces in the frame keyed by the face id.
           For each face id, the value holds the detected emotions, expressions, appearance metrics 
           and the coordinates of the feature points.
  - image: An imageData object containing the pixel values for the processed frame.
  - timestamp: The timestamp of the captured image in seconds.
*/
detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {});

/* 
  onImageResultsFailure is called when an error occurs while processing a frame and receives 3 parameters:
  - image: An imageData object containing the pixel values for the processed frame.
  - timestamp: The timestamp of the captured image in seconds.
  - err_detail: A string containing the encountered exception.
*/
detector.addEventListener("onImageResultsFailure", function (image, timestamp, err_detail) {});
onReset

This action occurs at the conclusion of detector.reset().

detector.addEventListener("onResetSuccess", function() {});
detector.addEventListener("onResetFailure", function() {});

onStop

This action occurs at the end of detector.stop(), after the web worker has been terminated and frame processing has stopped.

detector.addEventListener("onStopSuccess", function() {});
detector.addEventListener("onStopFailure", function() {});

Choosing the classifiers

The next step is to turn on detection of the metrics needed. For example, to enable the smile, joy and gender classifiers:

// Track smiles
detector.detectExpressions.smile = true;

// Track joy emotion
detector.detectEmotions.joy = true;

// Detect person's gender
detector.detectAppearance.gender = true;

To turn on detection of all expressions, emotions, emojis or appearance metrics:

detector.detectAllExpressions();
detector.detectAllEmotions();
detector.detectAllEmojis();
detector.detectAllAppearance();

The list of possible metrics that affdex detects can be found here.

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

Calling start creates a web worker for the processing. The worker downloads the SDK runtime and the classifier data files required to process images, and then initializes the SDK runtime.
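
Initialization is asynchronous, so frame processing should only begin once the onInitializeSuccess callback fires. A minimal sketch (startProcessingFrames is a placeholder for your own frame-capture loop):

detector.addEventListener("onInitializeSuccess", function() {
  // The worker has downloaded the runtime and classifier data;
  // it is now safe to pass frames to detector.process().
  startProcessingFrames();
});

detector.addEventListener("onInitializeFailure", function() {
  console.error("Failed to initialize the detector");
});

detector.start();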

Processing a frame

After successfully initializing the detector using the start method, frames can be passed to the detector by calling the process method. The process method expects an imageData object and the delta time in seconds. The delta time is calculated by subtracting the time of the first processed frame from the current time.


//Get a canvas element from DOM
var aCanvas = document.getElementById("canvas");
var context = aCanvas.getContext('2d');

//Cache the timestamp of the first frame processed
var startTimestamp = (new Date()).getTime() / 1000;

//Get imageData object.
var imageData = context.getImageData(0, 0, 640, 480);

//Get current time in seconds
var now = (new Date()).getTime() / 1000;

//Get delta time between the first frame and the current frame.
var deltaTime = now - startTimestamp;

//Process the frame
detector.process(imageData, deltaTime);
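
To process a live stream rather than a single frame, the same steps can be repeated for every frame, keeping the timestamps increasing. A sketch of one possible approach (assuming aVideo is a playing <video> element, looked up here by a hypothetical "video" id, and that the detector has already been initialized):

var aVideo = document.getElementById("video");

function captureFrame() {
  // Draw the current video frame onto the canvas and hand its pixels
  // to the detector with the elapsed time since the first frame.
  context.drawImage(aVideo, 0, 0, 640, 480);
  var imageData = context.getImageData(0, 0, 640, 480);
  var deltaTime = (new Date()).getTime() / 1000 - startTimestamp;
  detector.process(imageData, deltaTime);

  // Schedule the next capture.
  requestAnimationFrame(captureFrame);
}

requestAnimationFrame(captureFrame);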

Stopping the detector

At the end of the interaction with the detector, the web worker thread can be terminated by calling stop():

detector.stop();

The processing state can be reset by calling reset(). This resets the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();
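
For example, reset() can be called when the input changes (such as switching to a different video) so that the new frames start again from timestamp zero. A sketch, assuming the same detector instance is reused for the new input:

detector.addEventListener("onResetSuccess", function() {
  // Face IDs and timestamps start from zero again for the new input,
  // so re-anchor the timestamp used to compute delta times.
  startTimestamp = (new Date()).getTime() / 1000;
});

detector.reset();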