classdoc: [PhotoDetector]

The PhotoDetector class streamlines the processing of still images. Since photos lack continuity over time, expression and emotion detection is performed independently on each frame and the timestamp is ignored. As a result, the underlying emotion detection may return different results than the video-based detectors.

Creating the detector

The PhotoDetector constructor expects three parameters: context, maxNumFaces, and faceConfig.

public PhotoDetector(
              /**
                The application context.
              */
              Context context,

              /**
                The maximum number of faces to track
                If not specified, default=1
              */
              int maxNumFaces,

              /**
                Face detector configuration - If not specified, defaults to FaceDetectorMode.SMALL_FACES
                  FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
                  FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
              */
              FaceDetectorMode faceConfig
);

Example:

PhotoDetector detector = new PhotoDetector(this);
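
Using the signature above, the defaults can be overridden by passing all three parameters explicitly; for example, to track up to two faces occupying large portions of the frame:

PhotoDetector detector = new PhotoDetector(this, 2, FaceDetectorMode.LARGE_FACES);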

Configuring the callback functions

The detectors use callback functions defined in interface classes to communicate events and results. The event listeners must be initialized before the detector is started.

The FaceListener is a client callback interface which sends a notification when the detector has started or stopped tracking a face. Call setFaceListener to set the FaceListener:

classdoc: FaceListener [java]

public class MyActivity extends Activity implements Detector.FaceListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setFaceListener(this);
  }

  @Override
  public void onFaceDetectionStarted() { /* the detector started tracking a face */ }

  @Override
  public void onFaceDetectionStopped() { /* the detector stopped tracking a face */ }
}

The ImageListener is a client callback interface which delivers information about an image that has been handled by the detector. Call setImageListener to set the ImageListener:

classdoc: ImageListener [java]

public class MyActivity extends Activity implements Detector.ImageListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setImageListener(this);
  }

  //The following code sample shows how to retrieve metric values from the Face object
  @Override
  public void onImageResults(List<Face> faces, Frame image, float timestamp) {

      if (faces == null)
          return; //frame was not processed

      if (faces.size() == 0)
          return; //no face found

      //For each face found
      for (Face face : faces) {

        int faceId = face.getId();

        //Appearance
        Face.GENDER genderValue = face.appearance.getGender();
        Face.GLASSES glassesValue = face.appearance.getGlasses();
        Face.AGE ageValue = face.appearance.getAge();
        Face.ETHNICITY ethnicityValue = face.appearance.getEthnicity();


        //Some Emoji
        float smiley = face.emojis.getSmiley();
        float laughing = face.emojis.getLaughing();
        float wink = face.emojis.getWink();


        //Some Emotions
        float joy = face.emotions.getJoy();
        float anger = face.emotions.getAnger();
        float disgust = face.emotions.getDisgust();

        //Some Expressions
        float smile = face.expressions.getSmile();
        float brow_furrow = face.expressions.getBrowFurrow();
        float brow_raise = face.expressions.getBrowRaise();

        //Measurements
        float interocular_distance = face.measurements.getInterocularDistance();
        float yaw = face.measurements.orientation.getYaw();
        float roll = face.measurements.orientation.getRoll();
        float pitch = face.measurements.orientation.getPitch();

        //Face feature points coordinates
        PointF[] points = face.getFacePoints();

      }
  }
}

Choosing the classifiers

The next step is to turn on the detection of the metrics needed. For example, to turn on or off the detection of the smile and joy classifiers:

detector.setDetectSmile(true);
detector.setDetectJoy(true);

To turn on or off the detection of all expressions, emotions, emojis, or appearances:

detector.setDetectAllExpressions(true);
detector.setDetectAllEmotions(true);
detector.setDetectAllEmojis(true);
detector.setDetectAllAppearances(true);

To check the status of a classifier at any time, for example, smile:

boolean smileEnabled = detector.getDetectSmile();

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

To check whether the detector is running:

boolean running = detector.isRunning();
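
For example, a guard like the following (a minimal sketch using only the calls shown in this document) avoids starting a detector that is already running:

if (!detector.isRunning()) {
    detector.start();
}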

Processing a photo

After successfully starting the detector, photos can be processed by calling the process() method. process() expects an Affdex.Frame (Bitmap, RGBA, and YUV420sp/NV21 formats are supported).

classdoc: Frame [java]

For example, to process a YUV420sp/NV21 frame:

byte[] frameData; // NV21-encoded image data, e.g. captured from a camera
int width = 640;
int height = 480;

Frame.ByteArrayFrame frame = new Frame.ByteArrayFrame(frameData, width, height,
                                                      Frame.COLOR_FORMAT.YUV_NV21);
detector.process(frame);
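
A Bitmap can be wrapped in a similar way (a sketch; the Frame.BitmapFrame wrapper is assumed here, and photoPath is a hypothetical path to an image file):

Bitmap bitmap = BitmapFactory.decodeFile(photoPath); // decode the photo from disk

Frame.BitmapFrame frame = new Frame.BitmapFrame(bitmap);
detector.process(frame);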

Stopping the detector

At the end of the interaction with the detector, stop it as follows:

detector.stop();

Be sure to always call stop() following a successful call to start(), including in circumstances where you abort processing, such as in exception catch blocks. This ensures that resources held by the Detector instance are released.
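
One way to guarantee this pairing is a try/finally block (a minimal sketch; frame is the Affdex Frame built earlier):

detector.start();
try {
    detector.process(frame); // process one or more photos
} finally {
    detector.stop(); // release detector resources even if processing throws
}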

The processing state can also be reset by calling reset(). This method resets the context of the processed frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();