Model Details
Model Description
The Emotion-Recognition-8-ENet model is based on the EfficientNet convolutional neural network. It recognizes 8 emotions expressed through facial expressions: anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise.
Developed by: A. Savchenko
Model type: Computer vision
Training Details
Training Data
The model was trained on the VGGFace2 dataset. The dataset contains 3.31 million images of 9131 subjects (identities), with an average of 362.6 images per subject.
Testing Details
Metrics
The evaluation is performed on the AffectNet dataset.
Accuracy (8 classes): 63.13%
Accuracy (7 classes): 66.51%
For more information, please refer to the original paper.
Technical Specifications
Input/Output Details
Input:
Name: input
Info: NCHW BGR 0-255 image.
Output:
Name: output
Info: Prediction scores for every emotion.
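For illustration, here is a minimal NumPy/OpenCV sketch of preparing an input tensor and decoding the output scores. The 224x224 input resolution and the softmax over raw scores are assumptions not confirmed by this card; the label order simply follows the eight emotions listed above.

import cv2
import numpy as np

EMOTIONS = ["anger", "contempt", "disgust", "fear",
            "happiness", "neutral", "sadness", "surprise"]

def preprocess(face_bgr, size=224):
    # Resize and convert HWC BGR 0-255 to NCHW float32 (size is an assumption).
    img = cv2.resize(face_bgr, (size, size))
    return img.transpose(2, 0, 1)[np.newaxis].astype(np.float32)

def postprocess(scores):
    # Softmax over the per-emotion prediction scores, then take the top class.
    s = np.asarray(scores, dtype=np.float32).flatten()
    probs = np.exp(s - s.max())
    probs /= probs.sum()
    i = int(probs.argmax())
    return EMOTIONS[i], float(probs[i])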
Model Architecture
The general EfficientNet network consists of:
Stem: Initial layer with a standard convolution followed by a batch normalization and a ReLU6 activation.
Body: Consists of a series of MBConv blocks with different configurations. Each block includes depthwise separable convolutions and squeeze-and-excitation layers.
Head: Includes a final convolutional block, followed by a global average pooling layer.
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from a model-profiling package.
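To make the Body description concrete, here is a minimal PyTorch sketch of an MBConv-style block combining a depthwise separable convolution with a squeeze-and-excitation layer. It illustrates the block structure only; channel counts, kernel sizes, and activations vary per stage in the actual EfficientNet configuration.

import torch
import torch.nn as nn

class SqueezeExcite(nn.Module):
    # Channel attention: squeeze (global pool) then excite (sigmoid gating).
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.SiLU(),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
    def forward(self, x):
        return x * self.gate(x)

class MBConv(nn.Module):
    # Pointwise expansion -> depthwise conv -> squeeze-and-excitation -> projection.
    def __init__(self, in_ch, out_ch, expand=4, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),   # pointwise expansion
            nn.BatchNorm2d(mid),
            nn.SiLU(),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),  # depthwise
            nn.BatchNorm2d(mid),
            nn.SiLU(),
            SqueezeExcite(mid),
            nn.Conv2d(mid, out_ch, 1, bias=False),  # pointwise projection
            nn.BatchNorm2d(out_ch),
        )
        self.residual = stride == 1 and in_ch == out_ch
    def forward(self, x):
        y = self.block(x)
        return x + y if self.residual else y

x = torch.randn(1, 24, 56, 56)
print(MBConv(24, 24)(x).shape)  # torch.Size([1, 24, 56, 56])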
Quantization
The RVC4 version of the model was quantized using a custom calibration dataset of 40 face-cropped images taken from different datasets available on the web.
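As a rough illustration of how such a calibration set might be assembled (the card does not specify the exact tooling, and the paths and resolution below are hypothetical):

import glob
import cv2
import numpy as np

# Hypothetical folder of 40 face-cropped images gathered from public datasets.
paths = sorted(glob.glob("calibration_faces/*.jpg"))[:40]
samples = []
for p in paths:
    img = cv2.imread(p)                     # HWC, BGR, 0-255
    img = cv2.resize(img, (224, 224))       # resolution is an assumption
    samples.append(img.transpose(2, 0, 1))  # to CHW
np.save("calibration_set.npy", np.stack(samples).astype(np.float32))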
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most important steps for using this particular model.
Please consult the docs for more information.
The model output is post-processed by a ClassificationParser, which outputs a Classifications message (predicted emotion classes and scores).
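A minimal pipeline sketch, assuming the DepthAI v3 Python API and the ParsingNeuralNetwork helper from the depthai-nodes package; the model slug below is a placeholder, not this card's confirmed identifier:

import depthai as dai
from depthai_nodes import ParsingNeuralNetwork
# The import path of Classifications may differ between depthai-nodes versions.
from depthai_nodes.message import Classifications

# Hypothetical model slug -- replace it with the identifier from this model card.
MODEL_SLUG = "luxonis/emotion-recognition:8-enet"

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(camera, MODEL_SLUG)
    parser_output_queue = nn_with_parser.out.createOutputQueue()
    pipeline.start()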
Get parsed output(s):
while pipeline.isRunning():
    parser_output: Classifications = parser_output_queue.get()
    # Top-scoring emotion (assuming classes/scores are sorted by confidence).
    print(parser_output.classes[0], parser_output.scores[0])
Example
You can quickly run the model using our example.
The example demonstrates how to build a 2-stage DepthAI pipeline consisting of a face detection model and an emotion recognition model.
It automatically downloads the models, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.