Using a webcam is a common way to obtain video for facial expression detection. The CameraDetector can access a webcam connected to the device to capture frames and feed them directly to the facial expression engine.

You can try it out in JSFiddle.

Creating the detector

The CameraDetector constructor expects four parameters: { divRoot, width, height, faceMode }

/*
   SDK Needs to create video and canvas nodes in the DOM in order to function
   Here we are adding those nodes to a predefined div.
*/
var divRoot = $("#affdex_elements")[0];

// The captured frame's width in pixels
var width = 640;

// The captured frame's height in pixels
var height = 480;

/*
   Face detector configuration - If not specified, defaults to
   affdex.FaceDetectorMode.LARGE_FACES
   affdex.FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
   affdex.FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
*/
var faceMode = affdex.FaceDetectorMode.LARGE_FACES;

// Construct a CameraDetector and specify the image width / height and face detector mode.
var detector = new affdex.CameraDetector(divRoot, width, height, faceMode);

Configuring the callback functions

The detectors use callbacks to communicate events and results. Each action has two callbacks: a success callback, called when the action completes successfully, and a failure callback, called when it fails.

The functions addEventListener and removeEventListener are used to register or deregister a callback.
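As a minimal sketch, a callback can be deregistered under the same event name it was registered with (this assumes removeEventListener takes just the event name, mirroring addEventListener's first argument):

// Register a callback for an event
detector.addEventListener("onWebcamConnectSuccess", function () {
  console.log("Webcam connected.");
});

// Deregister the callback once it is no longer needed
detector.removeEventListener("onWebcamConnectSuccess");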

onInitialize

This action occurs at the end of a detector initialization process.

detector.addEventListener("onInitializeSuccess", function() {});
detector.addEventListener("onInitializeFailure", function() {});

onImageResults

This action occurs at the end of the processing of a video frame.

/* 
  onImageResultsSuccess is called when a frame is processed successfully and receives 3 parameters:
  - faces: Dictionary of faces in the frame keyed by the face id.
           For each face id, it contains the values of detected emotions, expressions,
           appearance metrics and coordinates of the feature points.
  - image: An imageData object containing the pixel values for the processed frame.
  - timestamp: The timestamp of the captured image in seconds.
*/
detector.addEventListener("onImageResultsSuccess", function (faces, image, timestamp) {});

/* 
  onImageResultsFailure is called when the processing of a frame fails and receives 3 parameters:
  - image: An imageData object containing the pixel values for the frame that failed to process.
  - timestamp: The timestamp of the captured image in seconds.
  - err_detail: A string containing the encountered exception.
*/
detector.addEventListener("onImageResultsFailure", function (image, timestamp, err_detail) {});

onReset

This action occurs at the conclusion of detector.reset().

detector.addEventListener("onResetSuccess", function() {});
detector.addEventListener("onResetFailure", function() {});

onStop

This action occurs at the end of a detector.stop() call, after the web worker has been terminated and frame processing has stopped.

detector.addEventListener("onStopSuccess", function() {});
detector.addEventListener("onStopFailure", function() {});

onWebcamConnect

When the camera detector tries to connect to a webcam, one of two possible callbacks occurs:

detector.addEventListener("onWebcamConnectSuccess", function() {
	console.log("I was able to connect to the camera successfully.");
});

detector.addEventListener("onWebcamConnectFailure", function() {
	console.log("I've failed to connect to the camera :(");
});

Choosing the classifiers

The next step is to turn on detection of the needed metrics. For example, to enable the smile, joy, and gender classifiers (setting a flag to false disables it):

// Track smiles
detector.detectExpressions.smile = true;

// Track joy emotion
detector.detectEmotions.joy = true;

// Detect person's gender
detector.detectAppearance.gender = true;

To turn on the detection of all expressions, emotions, emojis, or appearance metrics:

detector.detectAllExpressions();
detector.detectAllEmotions();
detector.detectAllEmojis();
detector.detectAllAppearance();

The list of possible metrics that Affdex detects can be found here.

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

Calling start() creates a web worker, which downloads the SDK runtime and the classifier data files required to process images, and then initializes the runtime.

In addition, it attempts to connect to the web camera to capture video frames.
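As a minimal sketch, the initialization callbacks described earlier can be registered before calling start() to confirm the outcome:

// Register initialization callbacks before starting, so no events are missed
detector.addEventListener("onInitializeSuccess", function () {
  console.log("Detector initialized; frame processing has begun.");
});
detector.addEventListener("onInitializeFailure", function () {
  console.log("Detector initialization failed.");
});

detector.start();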

Stopping the detector

At the end of the interaction with the detector, the web worker thread can be terminated by calling stop():

detector.stop();

The processing state can also be reset by calling reset(). This resets the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();
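As a minimal sketch, stop() can be tied to page unload so the web worker is not left running (this assumes the detector's isRunning flag, which the SDK samples use to guard such calls):

// Terminate the web worker when the user leaves the page
window.addEventListener("beforeunload", function () {
  if (detector && detector.isRunning) {
    detector.stop();
  }
});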