classdoc: CameraDetector [c++]

Using a webcam is a common way to obtain video for facial expression detection. The CameraDetector can access a webcam connected to the device to capture frames and feed them directly to the facial expression engine.

Creating the detector

The CameraDetector constructor expects five parameters { cameraId, cameraFPS, processFrameRate, maxNumFaces and faceConfig }

CameraDetector(
              /**
                The CameraId
                If not specified, default=0
              */
              int cameraId,

              /**
                The number of frames to capture per second.
                If not specified, default=15
              */
              int cameraFPS,

              /**
                The maximum number of frames processed per second
                If not specified, DEFAULT_PROCESSING_FRAMERATE=30
              */
              float processFrameRate,

              /**
                The maximum number of faces to track
                If not specified, DEFAULT_MAX_NUM_FACES=1
              */
              unsigned int maxNumFaces,

              /**
                Face detector configuration - If not specified, defaults to FaceDetectorMode::LARGE_FACES
                  FaceDetectorMode::LARGE_FACES = faces occupying large portions of the frame
                  FaceDetectorMode::SMALL_FACES = faces occupying small portions of the frame
              */
              FaceDetectorMode faceConfig

);

Example:

affdex::CameraDetector detector;
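
The detector can also be constructed with explicit arguments. The values below are illustrative only; in C++ the face detector mode is written with scope resolution, e.g. affdex::FaceDetectorMode::LARGE_FACES:

affdex::CameraDetector detector(0,    // cameraId: first camera on the system
                                15,   // cameraFPS: capture 15 frames per second
                                30,   // processFrameRate: process at most 30 frames per second
                                1,    // maxNumFaces: track a single face
                                affdex::FaceDetectorMode::LARGE_FACES);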

Configuring the detector

In order to initialize the detector, a valid location of the data folder must be specified:

Data folder

The Affdex classifier data files are used in frame analysis processing and are supplied as part of the SDK. To initialize a detector, pass it the fully qualified path to the folder containing these files by calling:

std::string classifierPath = "/home/abdo/affdex-sdk/data";
detector.setClassifierPath(classifierPath);

Camera

As an alternative to specifying the camera id and frame rate in the constructor, those parameters can be set by calling setCameraId and setCameraFPS:

int camId = 10;
int camFPS = 60;

detector.setCameraId(camId);
detector.setCameraFPS(camFPS);

Configuring the callback functions

The detectors use callback functions defined in interface classes to communicate events and results. The event listeners need to be set before the detector is started.

The FaceListener is a client callback interface which sends a notification when the detector has started or stopped tracking a face. Call setFaceListener to set the FaceListener:

classdoc: FaceListener [c++]

class MyApp : public affdex::FaceListener {
public:
  MyApp() {
    detector.setFaceListener(this);
  }

private:
  affdex::CameraDetector detector;
};

The ImageListener is a client callback interface which delivers information about an image which has been handled by the Detector. Call setImageListener to set the ImageListener:

classdoc: ImageListener [c++]

class MyApp : public affdex::ImageListener {
public:
  MyApp() {
    detector.setImageListener(this);
  }

private:
  affdex::CameraDetector detector;
};

The ProcessStatusListener is a callback interface which provides information regarding the processing state of the detector. Call setProcessStatusListener to set the ProcessStatusListener:

classdoc: ProcessStatusListener [c++]

class MyApp : public affdex::ProcessStatusListener {
public:
  MyApp() {
    detector.setProcessStatusListener(this);
  }

private:
  affdex::CameraDetector detector;
};
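
A single application class can also implement all three interfaces at once. The sketch below assumes the callback methods declared in the SDK's listener headers (onFaceFound/onFaceLost, onImageResults/onImageCapture, onProcessingException/onProcessingFinished); check the headers of your SDK version for the exact signatures:

class MyApp : public affdex::FaceListener,
              public affdex::ImageListener,
              public affdex::ProcessStatusListener {
public:
  MyApp() {
    detector.setFaceListener(this);
    detector.setImageListener(this);
    detector.setProcessStatusListener(this);
  }

  // FaceListener: notified when a face enters or leaves the frame
  void onFaceFound(float timestamp, affdex::FaceId faceId) override {}
  void onFaceLost(float timestamp, affdex::FaceId faceId) override {}

  // ImageListener: receives results for processed frames and the captured frames themselves
  void onImageResults(std::map<affdex::FaceId, affdex::Face> faces, affdex::Frame image) override {}
  void onImageCapture(affdex::Frame image) override {}

  // ProcessStatusListener: reports processing exceptions and end of processing
  void onProcessingException(affdex::AffdexException ex) override {}
  void onProcessingFinished() override {}

private:
  affdex::CameraDetector detector;
};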

Choosing the classifiers

The next step is to turn on the detection of the metrics needed. For example, to turn on or off the detection of the smile and joy classifiers:

detector.setDetectSmile(true);
detector.setDetectJoy(true);

To turn on or off the detection of all expressions, emotions, emojis or appearance metrics:

detector.setDetectAllExpressions(true);
detector.setDetectAllEmotions(true);
detector.setDetectAllEmojis(true);
detector.setDetectAllAppearances(true);

To check the status of a classifier at any time, for example smile:

bool smileEnabled = detector.getDetectSmile();

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

start() spawns a thread that connects to the specified camera, captures video frames, processes them, and uses the callback functions to report the captured frames, results and exceptions (if any). start() is a non-blocking call.
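
Because start() returns immediately, the application has to keep its main thread alive while the capture thread runs and callbacks fire. A minimal sketch (the 30-second duration is arbitrary):

#include <chrono>
#include <thread>

detector.start();                                       // non-blocking: capture and processing run on a separate thread
std::this_thread::sleep_for(std::chrono::seconds(30));  // keep the main thread alive while callbacks are delivered
detector.stop();                                        // stopping is described in the next section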

Stopping the detector

At the end of the interaction with the detector, stop it as follows:

detector.stop();

The processing state can also be reset. Calling reset clears the video frame context; face IDs and timestamps are reset to zero (0):

detector.reset();