Our new model ZOO works with DepthAI V3. Find out more in our documentation.
Model Details
Model Description
This model is based on a predefined model (light variant) from LuxonisTrain and has been custom-trained to detect eyes. Specifically, the model is targeted to work best on crops of people's faces.
Developed by: Luxonis
Shared by: Luxonis
Model type: Computer Vision
License: Apache 2.0
Resources for more information:
Training Details
Training Data
The dataset was used as the primary source of training images. In particular, we worked with the Menpo2D, 300W, and COFW subsets, restricted to semi-frontal images. Since this dataset provides keypoint annotations, we converted them into eye bounding boxes by selecting the relevant keypoints and generating axis-aligned bounding boxes around them.
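A minimal sketch of that keypoint-to-box conversion, assuming the landmarks for one eye are available as an (N, 2) array of (x, y) coordinates (the padding ratio is illustrative, not the exact value used in training):

import numpy as np

def eye_keypoints_to_bbox(eye_keypoints: np.ndarray, pad_ratio: float = 0.15):
    # eye_keypoints: (N, 2) array of (x, y) landmark coordinates for one eye
    x_min, y_min = eye_keypoints.min(axis=0)
    x_max, y_max = eye_keypoints.max(axis=0)
    # Pad the tight box slightly so the full eye region is covered
    pad_x = (x_max - x_min) * pad_ratio
    pad_y = (y_max - y_min) * pad_ratio
    return (x_min - pad_x, y_min - pad_y, x_max + pad_x, y_max + pad_y)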
To improve generalization, we also incorporated the dataset from Roboflow Universe.
In total, the final dataset comprised 19,283 images.
Testing Details
Metrics
The model was trained at 512x512 resolution using letterbox resizing. On the testing set of ~6k images, the model achieved a mean Average Precision (mAP) of 59.3%.
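Letterbox resizing scales the image to fit 512x512 while preserving the aspect ratio, then pads the borders to reach the target size. A minimal sketch with OpenCV (the gray padding value of 114 is a common convention, not confirmed for this model):

import cv2
import numpy as np

def letterbox(image: np.ndarray, size: int = 512, pad_value: int = 114):
    h, w = image.shape[:2]
    scale = size / max(h, w)
    resized = cv2.resize(image, (round(w * scale), round(h * scale)))
    # Pad the shorter side so the result is exactly size x size
    pad_h, pad_w = size - resized.shape[0], size - resized.shape[1]
    return cv2.copyMakeBorder(
        resized,
        pad_h // 2, pad_h - pad_h // 2,
        pad_w // 2, pad_w - pad_w // 2,
        cv2.BORDER_CONSTANT, value=(pad_value,) * 3,
    )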
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW RGB format with images normalized to the [0, 1] range (see the preprocessing sketch below).
Output:
Name: multiple (see NN archive)
Info: Raw, unprocessed detection outputs that are decoded by the output parser.
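For illustration, preparing an image into the expected input format manually would look like the following (DepthAI normally handles this for you on-device; the file name is a placeholder):

import cv2
import numpy as np

bgr = cv2.imread("face_crop.jpg")               # HWC, BGR, uint8 (placeholder file)
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)      # model expects RGB channel order
tensor = rgb.astype(np.float32) / 255.0         # normalize to [0, 1]
tensor = np.transpose(tensor, (2, 0, 1))[None]  # HWC -> NCHW with batch dimension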
Model Architecture
Backbone: EfficientRep backbone
Neck: Rep-PAN neck
Head: Efficient decoupled head that is anchor-free
For more details, see the official documentation (light variant).
Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized using a custom dataset based on a mixture of validation and test images. The dataset contains 128 images.
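For illustration, such a calibration subset could be assembled like this (directory names and the sampling seed are assumptions; the quantization itself is performed by the Luxonis conversion tooling):

import random
from pathlib import Path

random.seed(0)  # reproducible sampling
candidates = sorted(Path("val_images").glob("*.jpg")) + sorted(Path("test_images").glob("*.jpg"))
calibration_images = random.sample(candidates, k=128)  # 128-image calibration set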
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most crucial utilization steps for the particular model.
Please consult the docs for more information.
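A minimal pipeline sketch is shown below. It assumes the ParsingNeuralNetwork helper from the depthai-nodes package and uses a placeholder model slug; substitute the slug from this model's page:

import depthai as dai
from depthai_nodes import ParsingNeuralNetwork

with dai.Pipeline() as pipeline:
    # Camera node supplies frames to the neural network
    camera = pipeline.create(dai.node.Camera).build()
    # "luxonis/eye-detector:512x512" is a placeholder; use this model's actual slug
    nn = pipeline.create(ParsingNeuralNetwork).build(camera, "luxonis/eye-detector:512x512")
    parser_output_queue = nn.out.createOutputQueue()
    pipeline.start()
    # ... consume parser_output_queue while the pipeline is running (see below)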
The model is automatically parsed by DAI, and it outputs a dai.ImgDetections message (bounding boxes, labels, and scores of the detected objects).
Get model output(s):
while pipeline.isRunning():
    # Block until the next set of detections arrives
    nn_output: dai.ImgDetections = parser_output_queue.get()
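Each dai.ImgDetections message carries a list of detections with coordinates normalized to the input frame; a short usage sketch:

for det in nn_output.detections:
    # xmin/ymin/xmax/ymax are normalized to [0, 1] relative to the input frame
    print(f"label={det.label} conf={det.confidence:.2f} "
          f"box=({det.xmin:.2f}, {det.ymin:.2f}, {det.xmax:.2f}, {det.ymax:.2f})")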
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: