    Model Details
    Model Description
    The MediaPipe Palm detection model is a single-shot detector optimized for mobile realtime applications. The model detects palms instead of entire hands since estimating bounding boxes of rigid objects like palms and fists is significantly simpler than detecting hands with articulated fingers.
    • Developed by: Google
    • Shared by: Luxonis
    • Model type: Computer Vision
    • License: Apache 2.0
    • Resources for more information:
    Training Details
    Training Data
    For training the palm detector, an in-the-wild dataset was used, as it is sufficient for localizing hands and offers the highest variety in appearance.
    The in-the-wild dataset contains 6K images with large variety, e.g. geographical diversity, various lighting conditions, and hand appearance. Its limitation is that it does not contain complex articulation of hands.
    Testing Details
    Metrics
    Evaluation results are taken from the upstream model documentation.
    Metric            | Value
    Average Precision | 95.7%
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: input_1
        • Info: NCHW BGR un-normalized image (see the preprocessing sketch below)
    • Output:
      • Name: multiple (see NN archive)
        • Info: Hand bounding boxes and classification scores
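    For illustration, here is a minimal offline preprocessing sketch (using OpenCV and NumPy, which are assumptions on our side and not DepthAI requirements) that produces the expected un-normalized NCHW BGR input for the 192x192 variant; the file name is hypothetical, and inside a DepthAI pipeline the camera node supplies frames for you:
    import cv2
    import numpy as np
    
    frame = cv2.imread("hand.jpg")                  # hypothetical file; OpenCV loads BGR, HWC, uint8
    resized = cv2.resize(frame, (192, 192))         # resize to the model's input resolution
    nchw = np.transpose(resized, (2, 0, 1))[None]   # HWC -> CHW, add batch dim -> [1, 3, 192, 192]
    nchw = nchw.astype(np.float32)                  # keep raw 0-255 values (un-normalized)
    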
    Model Architecture
    The architecture combines the BlazePalm single-shot detector with an encoder-decoder feature extractor similar to FPN.
    Please consult the original publication for more information on the model architecture.
    Throughput
    Model variant: mediapipe-palm-detection:192x192
    • Input shape: [1, 3, 192, 192]
    • Output shapes: [[1, 2016, 18], [1, 2016, 1]]
    • Params (M): 1.133
    • GFLOPs: 0.354
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2     | FP16      | 51.18                 | N/A
    RVC4     | INT8      | 621.46                | 3.01
    Model variant: mediapipe-palm-detection:128x128
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2     | FP16      | 69.41                 | N/A
    * Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Quantization
    The RVC4 version of the model was quantized using a custom dataset, created from 30 images taken from the web in which people's hands are present.
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the key steps for using this particular model. Please consult the docs for more information.
    Install DAIv3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork  # path may be depthai_nodes.node in newer versions
    
    pipeline = dai.Pipeline()
    
    # Describe which model to fetch from the Luxonis model zoo
    model_description = dai.NNModelDescription(
        "luxonis/mediapipe-palm-detection:192x192"
    )
    
    # <CameraNode> is your camera node, e.g. pipeline.create(dai.node.Camera).build()
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
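
    Note that ParsingNeuralNetwork wraps the standard NeuralNetwork node and attaches the output parser declared in the model's NN archive head(s), so the detections arrive on the host already decoded.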
    
    Inspect model head(s):
    • MPPalmDetectionParser that outputs an ImgDetectionsExtended message (bounding boxes of detected hands with confidence scores).
    Get parsed output(s):
    parser_output_queue = nn.out.createOutputQueue()  # host-side queue for parsed results
    pipeline.start()
    while pipeline.isRunning():
        parser_output: ImgDetectionsExtended = parser_output_queue.get()
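
    As an illustration, here is a hedged sketch of iterating over a parsed message obtained from the queue above; the attribute names (detections, confidence, rotated_rect) follow depthai-nodes conventions for ImgDetectionsExtended and are assumptions to verify against the installed version:
    for detection in parser_output.detections:
        # assumed attribute: confidence score of the detected palm
        print(f"palm confidence: {detection.confidence:.2f}")
        # assumed attribute: bounding box as a rotated rectangle in normalized image coordinates
        rect = detection.rotated_rect
        print(f"center=({rect.center.x:.2f}, {rect.center.y:.2f}), angle={rect.angle:.1f}")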
    
    Example
    You can quickly run the model using our script. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/mediapipe-palm-detection:192x192
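
    To try the smaller variant, pass its slug to the same flag:
    python3 main.py \
        --model luxonis/mediapipe-palm-detection:128x128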
    
    Moreover, you can also check the related examples.
    MediaPipe Palm Detection
    Palm detection model.
    License: Apache 2.0 (commercial use)
    Tasks: Object Detection
    Model Types: IR, ONNX
    Model Variants
    Available For | Created At
    RVC2          | 11 months ago
    RVC2, RVC4    | 11 months ago