YuNet
    Model Details
    Model Description
YuNet is a highly efficient face detection model. With only 75k parameters, it is designed to operate with minimal computational resources while maintaining an exceptional balance between accuracy and speed, making it ideal for real-time applications on edge devices.
• Developed by: Wei Wu et al.
• Shared by: Luxonis
• Model type: Computer Vision
• License: MIT
• Resources for more information:
    Training Details
    Training Data
The model was trained on the WIDERFace dataset, which is split into training, validation, and test sets. WIDERFace is a face detection benchmark dataset whose images are selected from the publicly available WIDER dataset. The authors chose 32,203 images and labeled 393,703 faces with a high degree of variability in scale, pose, and occlusion. The dataset is organized into 61 event classes; for each event class, 40%/10%/50% of the data is randomly selected as the training, validation, and test sets, respectively. Based on the detection rate of EdgeBox (Zitnick & Dollár, 2014), three levels of difficulty (Easy, Medium, and Hard) are defined by incrementally incorporating hard samples.
    Testing Details
    Metrics
The AP metric was calculated for different subsets of the validation dataset based on face-detection difficulty level (Easy, Medium, and Hard). The results are taken from the repository.
| Metric    | Value |
| --------- | ----- |
| AP_easy   | 0.887 |
| AP_medium | 0.871 |
| AP_hard   | 0.768 |
    Technical Specifications
    Input/Output Details
• Input:
  • Name: image
  • Info: NCHW BGR image (see the preprocessing sketch below)
• Output:
  • Name: Multiple (please consult the NN archive config.json)
  • Info: Classification scores, objectness scores, bounding boxes, and keypoints for a multitude of detections.
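Before an image reaches the network, it must be laid out as a [1, 3, H, W] BGR tensor. Below is a minimal host-side sketch of that conversion, assuming OpenCV and NumPy; the to_nchw_bgr helper and the 640x480 resolution are illustrative, and in a DepthAI pipeline this step is normally handled on-device.

import cv2
import numpy as np

def to_nchw_bgr(frame: np.ndarray, width: int = 640, height: int = 480) -> np.ndarray:
    """Resize an OpenCV BGR frame and reorder it to a [1, 3, H, W] tensor."""
    resized = cv2.resize(frame, (width, height))  # OpenCV frames are already BGR
    chw = resized.transpose(2, 0, 1)              # HWC -> CHW
    return np.expand_dims(chw, axis=0)            # add batch dimension -> NCHW

# Example: a dummy 640x480 frame produces the expected input shape
dummy = np.zeros((480, 640, 3), dtype=np.uint8)
print(to_nchw_bgr(dummy).shape)  # (1, 3, 480, 640)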
    Model Architecture
    • Backbone: Compact feature extraction backbone originating from MobileNet
    • Neck: Simplified feature pyramid network (FPN) neck
    • Head: Anchor-free head
See the original paper for more information on the model architecture.
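To make the backbone description concrete, here is a rough, illustrative sketch (not the model's actual code) of the depthwise-separable convolution block that MobileNet-style backbones such as YuNet's are built from, assuming PyTorch is available:

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Depthwise conv (per-channel spatial filter) followed by a 1x1 pointwise conv."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, 3, stride, padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, 1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

block = DepthwiseSeparableConv(16, 32, stride=2)
print(block(torch.randn(1, 16, 240, 320)).shape)  # torch.Size([1, 32, 120, 160])

Factoring a standard 3x3 convolution into these two cheaper steps is what keeps the parameter count this small.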
    Throughput
    Model variant: yunet:960x720
• Input shape: [1, 3, 720, 960]
• Output shapes: [[39615, 14], [39615, 2], [39615, 1]]
• Params (M): 0.083
• GFLOPs: 0.943

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
| -------- | --------- | --------------------- | --------------------- |
| RVC2     | FP16      | 16.66                 | N/A                   |
| RVC4     | INT8      | 330.15                | 3.45                  |
    Model variant: yunet:640x480
• Input shape: [1, 3, 480, 640]
• Output shapes: [[17610, 14], [17610, 2], [17610, 1]]
• Params (M): 0.083
• GFLOPs: 0.419

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
| -------- | --------- | --------------------- | --------------------- |
| RVC2     | FP16      | 38.61                 | N/A                   |
| RVC4     | INT8      | 705.71                | 3.14                  |
    Model variant: yunet:1280x960
• Input shape: [1, 3, 960, 1280]
• Output shapes: [[70500, 14], [70500, 2], [70500, 1]]
• Params (M): 0.083
• GFLOPs: 1.678

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
| -------- | --------- | --------------------- | --------------------- |
| RVC4     | INT8      | 178.93                | 3.35                  |
    Model variant: yunet:320x240
• Input shape: [1, 3, 240, 320]
• Output shapes: [[4385, 14], [4385, 2], [4385, 1]]
• Params (M): 0.083
• GFLOPs: 0.105

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
| -------- | --------- | --------------------- | --------------------- |
| RVC2     | FP16      | 143.56                | N/A                   |
| RVC4     | INT8      | 694.17                | 2.46                  |
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
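As a quick sanity check on the numbers above, the per-variant GFLOPs scale almost exactly linearly with the number of input pixels, as expected for a fully convolutional detector. The snippet below reproduces that arithmetic using the 640x480 variant (0.419 GFLOPs) as the reference:

# Predicted GFLOPs = reference GFLOPs x (pixel count ratio)
ref_pixels, ref_gflops = 640 * 480, 0.419
for w, h, reported in [(320, 240, 0.105), (960, 720, 0.943), (1280, 960, 1.678)]:
    predicted = ref_gflops * (w * h) / ref_pixels
    print(f"{w}x{h}: predicted {predicted:.3f} GFLOPs, reported {reported}")
# 320x240: 0.105, 960x720: 0.943, 1280x960: 1.676 (vs. 1.678 reported)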
    Quantization
The RVC4 version of the model was quantized using a custom dataset, created by taking a 50-image subset of the dataset.
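Luxonis' exact conversion tooling is not shown here, but as an illustration only, post-training static INT8 quantization with a small calibration set typically looks like the sketch below, which uses onnxruntime's quantization API; the file names and the CalibReader helper are hypothetical.

import numpy as np
from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

class CalibReader(CalibrationDataReader):
    """Feeds ~50 preprocessed calibration images to the quantizer, one at a time."""
    def __init__(self, images):
        self._iter = iter(images)

    def get_next(self):
        batch = next(self._iter, None)
        # The input name "image" matches the I/O details above; shape is NCHW BGR.
        return None if batch is None else {"image": batch}

# Stand-in calibration data; in practice these would be real preprocessed images.
calib_images = [np.random.rand(1, 3, 480, 640).astype(np.float32) for _ in range(50)]
quantize_static(
    "yunet_640x480.onnx",       # hypothetical FP32 model path
    "yunet_640x480_int8.onnx",  # hypothetical quantized output path
    CalibReader(calib_images),
    weight_type=QuantType.QInt8,
)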
    Utilization
Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the flow of information between the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most important utilization steps for this particular model. Please consult the docs for more information.
Install the DAIv3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork
from depthai_nodes.message import ImgDetectionsExtended

pipeline = dai.Pipeline()

model_description = dai.NNModelDescription(
    "luxonis/yunet:640x480"
)

# <CameraNode> is a placeholder for your camera source node
nn = pipeline.create(ParsingNeuralNetwork).build(
    <CameraNode>, model_description
)
    
    Inspect model head(s):
• YuNetParser that outputs an ImgDetectionsExtended message (bounding boxes with keypoints and confidence scores for every detected face).
    Get parsed output(s):
parser_output_queue = nn.out.createOutputQueue()
pipeline.start()
while pipeline.isRunning():
    parser_output: ImgDetectionsExtended = parser_output_queue.get()
    
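Each ImgDetectionsExtended message carries a list of per-face detections. A minimal sketch of consuming it inside the loop above might look like this; the attribute names follow the depthai-nodes message definitions, but please verify them against the depthai-nodes documentation for your version:

for detection in parser_output.detections:
    print(f"face confidence: {detection.confidence:.2f}")
    for kp in detection.keypoints:  # facial landmarks (normalized coordinates)
        print(f"  keypoint: ({kp.x:.3f}, {kp.y:.3f})")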
    Example
    You can quickly run the model using our script. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/yunet:640x480
    
You can also check out any of our other examples.
License: MIT (commercial use permitted)
Downloads: 7864
Tasks: Object Detection
Model Types: ONNX