The PhotoDetector streamlines the processing of still images. Because photos have no continuity over time, expression and emotion detection is performed independently on each image and the timestamp is ignored. As a result, the underlying emotion detection may return different results than the video-based detectors.

You can try it out in JSFiddle.

Creating the detector

The PhotoDetector constructor expects a faceMode parameter:

/*
  Face detector configuration - If not specified, defaults to
  affdex.FaceDetectorMode.LARGE_FACES

  affdex.FaceDetectorMode.LARGE_FACES = Faces occupying large portions of the frame
  affdex.FaceDetectorMode.SMALL_FACES = Faces occupying small portions of the frame
*/
var faceMode = affdex.FaceDetectorMode.LARGE_FACES;

//Construct a PhotoDetector and specify the face detector mode.
var detector = new affdex.PhotoDetector(faceMode);

Configuring the callback functions

The detectors use callbacks to communicate events and results. Each action has two callbacks: a success callback, called when the action completes successfully, and a failure callback, called if the action fails.

The functions addEventListener and removeEventListener are used to register or deregister a callback.
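
To deregister a handler, pass removeEventListener the same function reference that was registered. A minimal sketch (the handler name is illustrative):

//Keep a reference to the handler so it can be deregistered later.
function onDetectorReady() {
  console.log("Detector initialized");
}

detector.addEventListener("onInitializeSuccess", onDetectorReady);

//Later, stop listening for the event:
detector.removeEventListener("onInitializeSuccess", onDetectorReady);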

onInitialize

This action occurs at the end of the detector initialization process.

detector.addEventListener("onInitializeSuccess", function() {});
detector.addEventListener("onInitializeFailure", function() {});

onImageResults

This action occurs at the end of processing an image.

/*
  onImageResults success is called when an image is processed successfully and receives 3 parameters:
  - faces: Dictionary of the faces found in the image, keyed by face id.
           For each face id, it contains the values of the detected emotions, expressions
           and appearance metrics, as well as the coordinates of the feature points.
  - image: An imageData object containing the pixel values of the processed image.
  - timestamp: The timestamp of the captured image in seconds.
*/
detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {});

/*
  onImageResults failure is called when an error occurs while processing an image and receives 3 parameters:
  - image: An imageData object containing the pixel values of the image that failed to process.
  - timestamp: The timestamp of the captured image in seconds.
  - err_detail: A string describing the encountered exception.
*/
detector.addEventListener("onImageResultsFailure", function (image, timestamp, err_detail) {});

onReset

This action occurs at the conclusion of detector.reset().

detector.addEventListener("onResetSuccess", function() {});
detector.addEventListener("onResetFailure", function() {});

onStop

This action occurs at the end of detector.stop(), after the web worker has been terminated and frame processing has stopped.

detector.addEventListener("onStopSuccess", function() {});
detector.addEventListener("onStopFailure", function() {});

Choosing the classifiers

The next step is to turn on detection of the needed metrics. For example, to enable the smile, joy and gender classifiers:

// Track smiles
detector.detectExpressions.smile = true;

// Track joy emotion
detector.detectEmotions.joy = true;

// Detect person's gender
detector.detectAppearance.gender = true;

To turn on detection of all expressions, emotions, emojis or appearance metrics at once:

detector.detectAllExpressions();
detector.detectAllEmotions();
detector.detectAllEmojis();
detector.detectAllAppearance();

The list of possible metrics that affdex detects can be found here.

Initializing the detector

After the detector is configured using the methods above, initialization can be triggered by calling the start method:

detector.start();

Calling start creates a web worker for the processing. The worker downloads the SDK runtime and the classifier data files required to process images, then initializes the SDK runtime.
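
Because initialization is asynchronous, processing should begin only after the onInitializeSuccess event fires. A minimal sketch:

detector.addEventListener("onInitializeSuccess", function () {
  //The detector is ready; it is now safe to call detector.process().
  console.log("Detector ready");
});

detector.start();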

Processing a frame

After the detector has been successfully initialized via the start method, images can be passed to it by calling the process method. The process method expects an imageData object.

//Get a canvas element from DOM
var aCanvas = document.getElementById("canvas");
var context = aCanvas.getContext('2d');

//Get imageData object.
var imageData = context.getImageData(0, 0, 640, 480);

//Process the frame
detector.process(imageData);
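
The canvas must already contain the photo's pixels. For example, an image element can be drawn onto the canvas before extracting the imageData (a sketch assuming an already-loaded &lt;img&gt; element with id "photo" exists in the DOM):

//Draw the photo onto the canvas so getImageData returns its pixels.
var img = document.getElementById("photo");
aCanvas.width = img.naturalWidth;
aCanvas.height = img.naturalHeight;
context.drawImage(img, 0, 0);

//Extract the pixels and hand them to the detector.
detector.process(context.getImageData(0, 0, aCanvas.width, aCanvas.height));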

Stopping the detector

At the end of the interaction with the detector, the web worker thread can be terminated by calling stop():

detector.stop();

The processing state can also be reset by calling reset(). This clears the processing context; additionally, face IDs and timestamps are set to zero (0):

detector.reset();
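
For example, when processing a batch of unrelated photos, reset can be called between images so that face IDs and timestamps start from zero for each photo. A sketch (processNextPhoto is a hypothetical helper, not part of the SDK):

detector.addEventListener("onResetSuccess", function () {
  //Hypothetical helper that draws the next photo and calls detector.process().
  processNextPhoto();
});

detector.reset();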