classdoc: Detector [ObjectiveC]

The Affdex SDK also allows you to process images rather than video. Images may be discrete (unrelated to one another) or continuous (frames extracted from a video, and therefore related). If you have a library of facial images captured independently of one another, use the discrete option. Continuous image processing suits a scenario in which your app records faces over a long period and, for storage efficiency, stores only one frame per second rather than the standard 30 FPS (the default for iPhones); the resulting images are related, and one frame per second still provides sufficient granularity for the app's purpose. Processing either discrete or continuous images does not use the device camera, so you can use the Affdex SDK to process images even while the camera is otherwise in use.

### Creating the detector

```objc
- (id)initWithDelegate:(id <AFDXDetectorDelegate>)delegate discreteImages:(BOOL)discrete maximumFaces:(NSUInteger)maximumFaces faceMode:(FaceDetectorMode)faceMode;
```

Like the other initialization methods, this one takes a reference to an object that adopts the AFDXDetectorDelegate protocol, the maximum number of faces to detect, and the face mode (LARGE_FACES or SMALL_FACES).

The second parameter, discrete, tells the detector whether the images to be processed are discrete. Set it to YES for discrete (unrelated) images and to NO for continuous (related) images.
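
For example, a detector for a library of unrelated facial images might be created like this (a minimal sketch, assuming `self` adopts the AFDXDetectorDelegate protocol):

```objc
// A minimal sketch: a detector for discrete (unrelated) images,
// tracking at most one large face. `self` must adopt AFDXDetectorDelegate.
AFDXDetector *detector = [[AFDXDetector alloc] initWithDelegate:self
                                                 discreteImages:YES
                                                   maximumFaces:1
                                                       faceMode:LARGE_FACES];
```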

### Configuring the detector

### Using the AFDXDetectorDelegate Protocol

The SDK communicates results to your app via the AFDXDetectorDelegate protocol. Here are the methods that your app will need to know about.

```objc
- (void)detector:(AFDXDetector *)detector didStartDetectingFace:(AFDXFace *)face;
```

This delegate method is called when the detector detects a new face coming into view. It is often used in conjunction with detector:didStopDetectingFace:. The implementation of this delegate method is optional.

```objc
- (void)detector:(AFDXDetector *)detector didStopDetectingFace:(AFDXFace *)face;
```

This delegate method is called when the detector no longer detects a particular face. It is the converse of detector:didStartDetectingFace:; together, the two methods signal when a face comes into or goes out of view. The implementation of this delegate method is also optional.
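
A minimal sketch implementing both callbacks:

```objc
// Sketch: react when faces enter and leave the scene.
- (void)detector:(AFDXDetector *)detector didStartDetectingFace:(AFDXFace *)face
{
    NSLog(@"A face came into view");
    // e.g. begin displaying live metrics for this face
}

- (void)detector:(AFDXDetector *)detector didStopDetectingFace:(AFDXFace *)face
{
    NSLog(@"A face went out of view");
    // e.g. stop displaying metrics associated with this face
}
```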

```objc
- (void)detector:(AFDXDetector *)detector hasResults:(NSMutableDictionary *)faces forImage:(NSImage *)image atTime:(NSTimeInterval)time;
```

This delegate method is called when the detector has processed a frame from the camera, a frame from a video file, or a static image. Four parameters are passed to this method:

  1. A reference to the detector.
  2. A dictionary of AFDXFace objects corresponding to the faces in the image. The key for each object is the face identifier. If nil is passed, then this is an unprocessed frame.
  3. A reference to the image.
  4. A timestamp (relative to 0) representing the point in time that the image was processed.

For the camera and video cases, the frames passed to the detector for processing are usually a subset of the available frames.

```objc
- (void)detectorDidFinishProcessing:(AFDXDetector *)detector;
```

This delegate method is called when the detector has finished processing a video file. (It is not called when using the camera or static images.) The implementation of this delegate method is optional.
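
One common pattern (a sketch, not the only option) is to stop the detector once the video file has been fully processed:

```objc
- (void)detectorDidFinishProcessing:(AFDXDetector *)detector
{
    // The video file has been fully processed; shut the detector down.
    NSError *error = [detector stop];
    if (error != nil)
    {
        NSLog(@"Error stopping detector: %@", error.localizedDescription);
    }
}
```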


### Choosing the classifiers
The next step is to turn on detection of the [metrics](/metrics) you need. By default, all classifiers are disabled. Here, we'll turn on a few classifiers. For example:

```objc
// turning on a few emotions
detector.joy = YES;
detector.anger = YES;

// turning on a few expressions
detector.smile = YES;
detector.browRaise = YES;
detector.browFurrow = YES;

// turning on a few emojis
detector.smiley = YES;
detector.kissing = YES; // etc
```

To turn on or off the detection of all expressions, emotions or emojis:

```objc
[detector setDetectAllEmotions:YES];
[detector setDetectAllExpressions:YES];
[detector setDetectEmojis:YES];
```

### Initializing the detector

After the detector is configured using the methods above, initialize it by calling the start method:

```objc
NSError *error = [detector start];
```

Check the return value for any error that may have occurred during the start process. If everything is fine, then nil will be returned and the detector comes to life.
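
A minimal sketch of the start-and-check pattern:

```objc
NSError *error = [detector start];
if (error != nil)
{
    NSLog(@"Detector failed to start: %@", error.localizedDescription);
    return;
}
// nil means the detector started successfully and is ready for images.
```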

### Processing frames

After successful initialization, the following method can be used to process images for detection:

```objc
- (void)processImage:(NSImage *)facePicture atTime:(NSTimeInterval)time;
```
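
As a sketch, here is how a one-frame-per-second sequence extracted from a video might be fed to a detector created with discreteImages:NO (`frames` is a hypothetical NSArray of NSImage objects):

```objc
NSTimeInterval timestamp = 0;
for (NSImage *frame in frames) // `frames` is hypothetical: one image per second of video
{
    [detector processImage:frame atTime:timestamp];
    timestamp += 1.0; // timestamps are relative to 0, one second apart
}
```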

### Getting detection results

When the dictionary of faces arrives in the delegate method, your application can interpret the data as it sees fit. Here's a code example:

```objc
// Convenience method to work with processed images.
- (void)processedImageReady:(AFDXDetector *)detector image:(NSImage *)image faces:(NSDictionary *)faces atTime:(NSTimeInterval)time
{
    for (AFDXFace *face in [faces allValues])
    {
        if (isnan(face.expressions.smile) == NO)
        {
            // do something with the value...
        }
        if (isnan(face.expressions.browRaise) == NO)
        {
            // do something with the value...
        }
        // handle other metrics here
        // ...
    }
}

// Convenience method to work with unprocessed images.
- (void)unprocessedImageReady:(AFDXDetector *)detector image:(NSImage *)image atTime:(NSTimeInterval)time
{
    // This is an unprocessed frame... do something with it...
}

// The delegate method of the AFDXDetectorDelegate protocol.
- (void)detector:(AFDXDetector *)detector hasResults:(NSMutableDictionary *)faces forImage:(NSImage *)image atTime:(NSTimeInterval)time
{
    if (nil == faces)
    {
        [self unprocessedImageReady:detector image:image atTime:time];
    }
    else
    {
        [self processedImageReady:detector image:image faces:faces atTime:time];
    }
}
```

In the above code snippet, the delegate method calls one of two instance methods depending on the value of the faces dictionary. The unprocessedImageReady:image:atTime: method receives unprocessed frames, while the processedImageReady:image:faces:atTime: method receives the processed ones. In the latter, you can check the metric values for all AFDXFace objects in the dictionary. Each metric value should be checked for NaN (not a number), which indicates that the detector was not instructed to detect that emotion or expression.

For multiple face detection, keep in mind that each face has its own face identifier (a unique number), which is tracked as long as that face remains in the image and does not “cross over” another face. If one face’s bounding box collides with another face’s bounding box from one frame to the next (in video or continuous image mode), the face tracker may assign different face IDs to those faces.
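
Since the results dictionary is keyed by face identifier, you can attribute metrics to individual faces. A sketch, assuming the keys are NSNumber identifiers:

```objc
// Sketch: walk the results dictionary, keyed by face identifier.
[faces enumerateKeysAndObjectsUsingBlock:^(NSNumber *faceId, AFDXFace *face, BOOL *stop) {
    NSLog(@"Results for face %@", faceId); // key type assumed to be NSNumber
    // inspect face.expressions / face.emotions for this face here
}];
```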

### Stopping the detector

At the end of the interaction with the detector, stop it as follows:

```objc
NSError *error = [detector stop];
```

The processing state can also be reset. The reset method clears the context of the video frames; additionally, face IDs and timestamps are set to zero (0):

```objc
NSError *error = [detector reset];
```
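
As with start and stop, it is worth checking the returned error. A sketch of reusing one detector across two unrelated video files:

```objc
// Sketch: reset between two unrelated inputs so face IDs and
// timestamps start again from zero for the second file.
NSError *error = [detector reset];
if (error != nil)
{
    NSLog(@"Error resetting detector: %@", error.localizedDescription);
}
```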