SDK Developer Guide Release 3.2
The SDK detects facial expressions, their underlying emotions, appearance metrics, and emojis from facial images. It is distributed as a tarball whose included assemblies enable integration with C++ applications, along with the data folder required by the API at runtime.
Download the SDK archive for the desired architecture:
For ARM architecture: a beta build of the SDK for ARM is now available. Use it to build apps for embedded platforms such as the Raspberry Pi.
Extract the archive (the target directory must exist before tar's -C option can use it):
mkdir -p $HOME/affdex-sdk
tar -xzvf affdex-cpp-sdk-3.2-2893-linux-64bit.tar.gz -C $HOME/affdex-sdk
To compile on Linux, you must have the development headers for libcurl, OpenSSL, and libuuid. These packages are typically available through your distribution's package manager.
Ubuntu: sudo apt-get install libcurl-dev uuid-dev
CentOS: sudo yum install libcurl-devel.x86_64 libuuid-devel.x86_64
The tarball includes the header and library files your application will need. Update your build configuration to point at the affdex-sdk "include" and "lib" directories.
For example, main.cpp initializes an instance of affdex::FrameDetector:
int main(int argc, char ** argv)
To compile the main.cpp file:
g++ main.cpp -o main -std=c++11 -I$HOME/affdex-sdk/include -L$HOME/affdex-sdk/lib -l affdex-native
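Before running the compiled binary, the dynamic linker must be able to locate the SDK's shared libraries. One common approach (shown here as a sketch; installing the libraries system-wide or using rpath are alternatives) is to extend LD_LIBRARY_PATH:

```shell
# Make the SDK's shared libraries (libaffdex-native.so and the bundled
# FFmpeg libraries) visible to the dynamic linker at run time.
export LD_LIBRARY_PATH="$HOME/affdex-sdk/lib:$LD_LIBRARY_PATH"
# ./main   # then run your compiled binary
```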
For more complex applications, you may want to use CMake to generate the makefiles for compiling your application.
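A minimal CMakeLists.txt sketch, mirroring the g++ invocation above (the AFFDEX_DIR path and the affdex-native library name are taken from that command; adjust them to your installation):

```cmake
cmake_minimum_required(VERSION 3.5)
project(affdex-example CXX)

set(CMAKE_CXX_STANDARD 11)

# Assumption: the SDK was extracted to $HOME/affdex-sdk as shown earlier.
set(AFFDEX_DIR "$ENV{HOME}/affdex-sdk")

include_directories("${AFFDEX_DIR}/include")
link_directories("${AFFDEX_DIR}/lib")

add_executable(main main.cpp)
target_link_libraries(main affdex-native)
```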
In addition to libaffdex-native.so, the SDK lib folder contains FFmpeg libraries, which are required at runtime by the VideoDetector for video decoding. FFmpeg is an open-source video decoding library licensed under the LGPL. Also, when an internet connection is available, the SDK uses it to transmit anonymized usage data.
Facial images can be captured from different sources. For each source, the SDK defines a detector class that handles processing images acquired from that source:
Sample applications for processing videos and connecting to the camera are available for cloning on our GitHub repository.
As of v3.1, the SDK exposes a max_faces parameter in the detector constructors to specify the maximum number of faces to look for in an image. For real-time use cases, achieving high accuracy and processing throughput (20+ processed frames per second) requires one CPU thread per face.
On a recent dual-core machine, the SDK can track up to 3 people in parallel with all facial expression, emotion, and appearance metrics enabled.
If the number of faces tracked exceeds the number of available CPU threads on the machine, all faces will still be tracked, but at the cost of a lower processing frame rate. Therefore, plan for enough hardware to handle the number of faces you expect to track with each camera.