FrameDetector

The FrameDetector tracks expressions in a sequence of real-time frames. It expects each frame to have a timestamp that indicates the time the frame was captured, and the timestamps must arrive in increasing order. The FrameDetector detects a face in a frame and delivers information on it to you, including the facial expressions.

Creating the detector

The FrameDetector constructor takes up to three parameters: context, maxNumFaces, and faceDetectorMode.

public FrameDetector(
              /**
                The application context.
              */
              Context context,

              /**
                The maximum number of faces to track
                If not specified, default=1
              */
              int maxNumFaces,

              /**
                Face detector configuration - If not specified, defaults to FaceDetectorMode.LARGE_FACES
                  FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
                  FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
              */
              FaceDetectorMode faceConfig
);

Example, using the defaults for maxNumFaces and faceDetectorMode:

FrameDetector detector = new FrameDetector(this);
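
To set maxNumFaces and the face detector mode explicitly, pass all three arguments; the values shown here are the documented defaults:

FrameDetector detector = new FrameDetector(this, 1, FaceDetectorMode.LARGE_FACES);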

Configuring the callback functions

The detectors use callback functions defined in interface classes to communicate events and results. The event listeners need to be initialized before the detector is started.

The FaceListener is a client callback interface which sends a notification when the detector has started or stopped tracking a face. Call setFaceListener to set the FaceListener:

public class MyActivity extends Activity implements Detector.FaceListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setFaceListener(this);
  }
  @Override
  public void onFaceDetectionStarted() { /* a face has entered the frame */ }
  @Override
  public void onFaceDetectionStopped() { /* the tracked face has been lost */ }
}

The ImageListener is a client callback interface which delivers information about a frame that has been processed by the detector. Call setImageListener to set the ImageListener:

public class MyActivity extends Activity implements Detector.ImageListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setImageListener(this);
  }

  //The following code sample shows how to retrieve metric values from the Face object
  @Override
  public void onImageResults(List<Face> faces, Frame image, float timestamp) {

      if (faces == null)
          return; //frame was not processed

      if (faces.size() == 0)
          return; //no face found

      //For each face found
      for (int i = 0; i < faces.size(); i++) {
        Face face = faces.get(i);

        int faceId = face.getId();

        //Appearance
        Face.GENDER genderValue = face.appearance.getGender();
        Face.GLASSES glassesValue = face.appearance.getGlasses();
        Face.AGE ageValue = face.appearance.getAge();
        Face.ETHNICITY ethnicityValue = face.appearance.getEthnicity();


        //Some Emoji
        float smiley = face.emojis.getSmiley();
        float laughing = face.emojis.getLaughing();
        float wink = face.emojis.getWink();


        //Some Emotions
        float joy = face.emotions.getJoy();
        float anger = face.emotions.getAnger();
        float disgust = face.emotions.getDisgust();

        //Some Expressions
        float smile = face.expressions.getSmile();
        float brow_furrow = face.expressions.getBrowFurrow();
        float brow_raise = face.expressions.getBrowRaise();

        //Measurements
        float interocular_distance = face.measurements.getInterocularDistance();
        float yaw = face.measurements.orientation.getYaw();
        float roll = face.measurements.orientation.getRoll();
        float pitch = face.measurements.orientation.getPitch();

        //Face feature points coordinates
        PointF[] points = face.getFacePoints();

      }
  }
}

Choosing the classifiers

The next step is to turn on detection of the needed metrics. For example, to turn the smile and joy classifiers on (pass false to turn them off):

detector.setDetectSmile(true);
detector.setDetectJoy(true);

To turn on or off the detection of all expressions, emotions, emojis, or appearances:

detector.setDetectAllExpressions(true);
detector.setDetectAllEmotions(true);
detector.setDetectAllEmojis(true);
detector.setDetectAllAppearances(true);

To check the status of a classifier at any time, for example smile:

boolean smileEnabled = detector.getDetectSmile();

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

To check whether the detector is running:

boolean running = detector.isRunning();

Processing a frame

After successfully initializing the detector using the start method, frames can be passed to it by calling the process method. The process method expects a Frame; for each frame, create an Affdex.Frame (Bitmap, RGBA, and YUV420sp/NV21 formats are supported).

For example, to process a YUV420sp/NV21 frame captured from the camera:

byte[] frameData; //NV21 image data, e.g. from the camera's onPreviewFrame callback
int width = 640;
int height = 480;
float timeStamp = SystemClock.elapsedRealtime() / 1000f;

Frame.ByteArrayFrame frame = new Frame.ByteArrayFrame(frameData, width, height,
                                                      Frame.COLOR_FORMAT.YUV_NV21);
frame.setTargetRotation(rotation); //rotation of the captured frame
detector.process(frame, timeStamp);

The FrameDetector uses the timestamp field to keep track of time. Therefore, make sure it is set to a positive number that increases with each subsequent frame passed for processing.
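
One simple way to guarantee this is to derive every timestamp from the same elapsed-time clock, relative to a start time captured once before the first frame. The sketch below uses illustrative names (startTimeMillis, nextTimestamp) that are not part of the SDK:

long startTimeMillis = SystemClock.elapsedRealtime(); //captured once, before the first frame

//Returns the seconds elapsed since startTimeMillis; never decreases between calls
float nextTimestamp() {
    return (SystemClock.elapsedRealtime() - startTimeMillis) / 1000f;
}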

Stopping the detector

At the end of the interaction with the detector, stop it as follows:

detector.stop();

Be sure to always call stop() following a successful call to start() (including for example, in circumstances where you abort processing, such as in exception catch blocks). This ensures that resources held by the Detector instance are released.
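
For example, one way to guarantee this pairing is a try/finally block around the processing loop, using only the methods shown above:

try {
    detector.start();
    detector.process(frame, timeStamp); //process frames as they arrive
} catch (Exception e) {
    //abort processing; handle or log the failure
} finally {
    if (detector.isRunning()) {
        detector.stop(); //always release the detector's resources
    }
}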

The processing state can also be reset. The reset method resets the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();
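
For instance, when reusing a single detector across unrelated clips, a reset between them keeps face IDs and timestamps from carrying over. processClip below is a hypothetical helper that feeds each clip's frames to the detector:

processClip(detector, firstClip);  //face IDs and timestamps accumulate here
detector.reset();                  //the processing context, face IDs, and timestamps start over
processClip(detector, secondClip); //the second clip begins with a fresh state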