classdoc: CameraDetector [java]

Using a webcam is a common way to obtain video for facial expression detection. The CameraDetector can access a webcam connected to the device to capture frames and feed them directly to the facial expression engine.

Creating the detector

The CameraDetector constructor must be called from the app's main thread (e.g., from within the main activity's onCreate() method). There are two signatures: one that takes three parameters { context, cameraType, and cameraPreviewView }, and one that takes five parameters { context, cameraType, cameraPreviewView, maxNumFaces, and faceConfig }. The 3-argument constructor defaults to single-face detection and large-face mode.

public CameraDetector(
              /**
                The application context.
              */
              Context context,

              /**
                The enum representing which camera to use (front or back)
              */
              CameraType cameraType,

              /**
                 A SurfaceView to use as a camera preview.
              */
              SurfaceView cameraPreviewView,

              /**
                The maximum number of faces to track.
                If not specified, defaults to 1.
              */
              int maxNumFaces,

              /**
                Face detector configuration - If not specified, defaults to FaceDetectorMode.LARGE_FACES
                  FaceDetectorMode.LARGE_FACES=Faces occupying large portions of the frame
                  FaceDetectorMode.SMALL_FACES=Faces occupying small portions of the frame
              */
              FaceDetectorMode faceConfig
);

Examples:

CameraDetector detector = new CameraDetector(this, CameraType.CAMERA_FRONT, camServiceView);
CameraDetector detector = new CameraDetector(this, CameraType.CAMERA_FRONT,
                                             camServiceView, 1, FaceDetectorMode.LARGE_FACES);

The CameraDetector stretches the captured images to fit the entire SurfaceView, so it is the responsibility of the developer to size the SurfaceView to have the same aspect ratio as the returned camera images. The OnCameraEventListener interface reports the selected camera frame size.

As of SDK 2.0, it is no longer possible to submit a null value for the SurfaceView. The Android API requires a Surface for its camera to function.

Please do not register for the SurfaceHolder.Callback interface belonging to this SurfaceView, as that interface is managed by the SDK.

Configuring the detector

Sizing the SurfaceView

Aside from the convenience of managing the Android Camera, CameraDetector also takes care of choosing the frame rate and frame size that will work best with the SDK. Since it is the developer's responsibility to lay out and size the SurfaceView passed into CameraDetector, you may want to resize the SurfaceView to match the aspect ratio of the returned frames. For this purpose, implement the CameraDetector.OnCameraEventListener interface to receive the onCameraSizeSelected event. Below is a block of sample code showing how to resize the SurfaceView to occupy as much space as its parent container while matching the aspect ratio of the incoming camera frames.

@Override
public void onCameraSizeSelected(int cameraWidth, int cameraHeight, ROTATE rotation) {
    int cameraPreviewWidth;
    int cameraPreviewHeight;

    // cameraWidth and cameraHeight report the unrotated dimensions of the
    // camera frames, so swap the width and height if necessary
    if (rotation == ROTATE.BY_90_CCW || rotation == ROTATE.BY_90_CW) {
      cameraPreviewWidth = cameraHeight;
      cameraPreviewHeight = cameraWidth;
    } else {
      cameraPreviewWidth = cameraWidth;
      cameraPreviewHeight = cameraHeight;
    }

    // retrieve the width and height of the ViewGroup object containing our
    // SurfaceView (in an actual application, we would want to consider the
    // possibility that the mainLayout object may not have been sized yet)
    int layoutWidth = mainLayout.getWidth();
    int layoutHeight = mainLayout.getHeight();

    // compute the aspect ratio of the ViewGroup object and of the camera preview
    float layoutAspectRatio = (float) layoutWidth / layoutHeight;
    float cameraPreviewAspectRatio = (float) cameraPreviewWidth / cameraPreviewHeight;

    int newWidth;
    int newHeight;

    if (cameraPreviewAspectRatio > layoutAspectRatio) {
      newWidth = layoutWidth;
      newHeight = (int) (layoutWidth / cameraPreviewAspectRatio);
    } else {
      newWidth = (int) (layoutHeight * cameraPreviewAspectRatio);
      newHeight = layoutHeight;
    }

    // size the SurfaceView
    ViewGroup.LayoutParams params = surfaceView.getLayoutParams();
    params.height = newHeight;
    params.width = newWidth;
    surfaceView.setLayoutParams(params);
}

Hiding the SurfaceView

Some applications may not wish to display the camera preview on screen. Since Android requires an active Surface for the camera to function, CameraDetector always requires a SurfaceView to be passed in. However, if you do not wish to display the preview, you can set the SurfaceView to be 1px by 1px and call SurfaceView.setAlpha(0) to hide it on-screen.
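For example, a minimal sketch (assuming surfaceView is the same SurfaceView instance that was passed to the CameraDetector constructor):

// shrink the preview to a single pixel and make it fully transparent;
// the Surface stays active, so the camera continues to deliver frames
ViewGroup.LayoutParams params = surfaceView.getLayoutParams();
params.width = 1;
params.height = 1;
surfaceView.setLayoutParams(params);
surfaceView.setAlpha(0);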

Processing

The process rate (i.e., the number of video frames processed per second) can be controlled by calling setMaxProcessRate. The higher the rate, the more CPU-intensive the processing; the minimum recommended rate for quality emotion detection is 5 frames per second.

int rate = 10;
detector.setMaxProcessRate(rate);

The onImageResults callback will be skipped for the unprocessed frames unless the detector is configured to send them:

detector.setSendUnprocessedFrames(true);

Configuring the callback functions

The detectors use callback functions defined in interface classes to communicate events and results. The event listeners must be set before the detector is started.

The FaceListener is a client callback interface which sends a notification when the detector has started or stopped tracking a face. Call setFaceListener to set the FaceListener:

classdoc: FaceListener [java]

public class MyActivity extends Activity implements Detector.FaceListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setFaceListener(this);
  }
  @Override
  public void onFaceDetectionStarted() { /* the detector started tracking a face */ }
  @Override
  public void onFaceDetectionStopped() { /* the tracked face was lost */ }
}

The ImageListener is a client callback interface which delivers information about an image which has been handled by the Detector. Call setImageListener to set the ImageListener:

classdoc: ImageListener [java]

public class MyActivity extends Activity implements Detector.ImageListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setImageListener(this);
  }

  // The following code sample shows how to retrieve metric values from the Face object
  @Override
  public void onImageResults(List<Face> faces, Frame image, float timestamp) {

      if (faces == null)
          return; //frame was not processed

      if (faces.size() == 0)
          return; //no face found

      //For each face found
      for (int i = 0; i < faces.size(); i++) {
        Face face = faces.get(i);

        int faceId = face.getId();

        //Appearance
        Face.GENDER genderValue = face.appearance.getGender();
        Face.GLASSES glassesValue = face.appearance.getGlasses();
        Face.AGE ageValue = face.appearance.getAge();
        Face.ETHNICITY ethnicityValue = face.appearance.getEthnicity();

        //Some Emoji
        float smiley = face.emojis.getSmiley();
        float laughing = face.emojis.getLaughing();
        float wink = face.emojis.getWink();

        //Some Emotions
        float joy = face.emotions.getJoy();
        float anger = face.emotions.getAnger();
        float disgust = face.emotions.getDisgust();

        //Some Expressions
        float smile = face.expressions.getSmile();
        float brow_furrow = face.expressions.getBrowFurrow();
        float brow_raise = face.expressions.getBrowRaise();

        //Measurements
        float interocular_distance = face.measurements.getInterocularDistance();
        float yaw = face.measurements.orientation.getYaw();
        float roll = face.measurements.orientation.getRoll();
        float pitch = face.measurements.orientation.getPitch();

        //Face feature points coordinates
        PointF[] points = face.getFacePoints();

      }
  }
}

The OnCameraEventListener is a client callback interface which reports the camera frame size selected by the detector. Call setOnCameraEventListener to set the listener:

public class MyActivity extends Activity implements CameraDetector.OnCameraEventListener {
  @Override
  protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    detector.setOnCameraEventListener(this);
  }

  @Override
  public void onCameraSizeSelected(int cameraWidth, int cameraHeight, ROTATE rotation) {
    // resize the SurfaceView to match, as shown in "Sizing the SurfaceView" above
  }
}

Choosing the classifiers

The next step is to turn on the detection of the metrics needed. For example, to turn on or off the detection of the smile and joy classifiers:

detector.setDetectSmile(true);
detector.setDetectJoy(true);

To turn on or off the detection of all expressions, emotions, emojis, or appearances:

detector.setDetectAllExpressions(true);
detector.setDetectAllEmotions(true);
detector.setDetectAllEmojis(true);
detector.setDetectAllAppearances(true);

To check the status of a classifier at any time, for example smile:

boolean smileEnabled = detector.getDetectSmile();

Initializing the detector

After a detector is configured using the methods above, the detector initialization can be triggered by calling the start method:

detector.start();

To check whether the detector is running:

boolean running = detector.isRunning();

Calling start() connects to the specified camera, captures video frames, processes them, and uses the callback functions to report the captured frames, results, and exceptions (if any).

Stopping the detector

At the end of the interaction with the detector, stop it as follows:

detector.stop();

Be sure to always call stop() following a successful call to start() (including, for example, in circumstances where you abort processing, such as in exception catch blocks). This ensures that resources held by the Detector instance are released.
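For example, one common pattern (a sketch, assuming the detector was created in the activity's onCreate()) ties start() and stop() to the activity lifecycle:

@Override
protected void onResume() {
  super.onResume();
  detector.start();
}

@Override
protected void onPause() {
  // guard against stopping a detector that never started
  if (detector.isRunning()) {
    detector.stop();
  }
  super.onPause();
}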

The processing state can also be reset. The reset method resets the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

detector.reset();