    Model Details
    Model Description
    The MediaPipe Hand Landmarker model performs precise keypoint localization of 21 3D hand-knuckle coordinates inside the detected hand regions via regression, i.e. direct coordinate prediction. The model learns a consistent internal hand pose representation and remains robust even to partially visible hands and self-occlusions.
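    For reference, the 21 keypoints follow the standard MediaPipe hand-landmark ordering (wrist first, then four joints per finger, thumb through pinky). The index map below is an illustrative sketch of that convention; verify against the MediaPipe documentation if exact naming matters:
    # MediaPipe hand-landmark ordering: index i corresponds to keypoint i in the model output
    HAND_LANDMARK_NAMES = [
        "wrist",
        "thumb_cmc", "thumb_mcp", "thumb_ip", "thumb_tip",
        "index_mcp", "index_pip", "index_dip", "index_tip",
        "middle_mcp", "middle_pip", "middle_dip", "middle_tip",
        "ring_mcp", "ring_pip", "ring_dip", "ring_tip",
        "pinky_mcp", "pinky_pip", "pinky_dip", "pinky_tip",
    ]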
    • Developed by: Google
    • Shared by:
    • Model type: Computer Vision
    • License: Apache 2.0
    • Resources for more information:
    Training Details
    Training Data
    Three different datasets were used to train the hand landmarker:
    • In-the-wild dataset: This dataset contains 6K images of large variety, e.g. geographical diversity, various lighting conditions, and hand appearance. The limitation of this dataset is that it doesn’t contain complex articulation of hands.
    • In-house collected gesture dataset: This dataset contains 10K images that cover various angles of all physically possible hand gestures. The limitation of this dataset is that it’s collected from only 30 people with limited variation in background. The in-the-wild and in-house datasets are great complements to each other to improve robustness.
    • Synthetic dataset: To even better cover the possible hand poses and provide additional supervision for depth, we render a high-quality synthetic hand model over various backgrounds and map it to the corresponding 3D coordinates.
    Testing Details
    Metrics
    Evaluation results are taken from .
    Metric                 Value
    Mean Squared Error     11.83
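    A minimal sketch of how mean squared error over predicted keypoints can be computed (illustrative only; the exact normalization behind the reported value is not specified here):
    import numpy as np

    def keypoint_mse(pred, target):
        """MSE over keypoint coordinates; pred and target have shape (N, 21, 3)."""
        pred = np.asarray(pred, dtype=np.float32)
        target = np.asarray(target, dtype=np.float32)
        return float(np.mean((pred - target) ** 2))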
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: input_1
        • Info: NCHW BGR un-normalized image
    • Output:
      • Name: multiple (see NN archive)
        • Info: Hand keypoints, handedness (left or right hand), confidence score
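    The DepthAI pipeline handles input preprocessing for you, but for manual inference (e.g. with the ONNX model) a minimal preprocessing sketch could look like this; the file name is hypothetical and the 224x224 resolution matches the model variant listed below:
    import cv2
    import numpy as np

    frame = cv2.imread("hand.jpg")                 # hypothetical input; OpenCV loads images as BGR
    frame = cv2.resize(frame, (224, 224))          # match the model input resolution
    tensor = frame.transpose(2, 0, 1)[None, ...]   # HWC -> CHW, add batch dim -> NCHW
    tensor = np.ascontiguousarray(tensor, dtype=np.uint8)  # un-normalized 0-255 values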
    Model Architecture
    The model has a shared feature extractor and three separate heads for the three outputs (hand landmarks, hand presence, and handedness). Each head is trained on the corresponding datasets.
    Please consult for more information on model architecture.
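    To make the shared-extractor / three-head structure concrete, here is a purely illustrative PyTorch sketch; it is not the actual MediaPipe architecture, and the backbone layers and feature size are placeholders:
    import torch.nn as nn

    class HandLandmarkerSketch(nn.Module):
        """Illustrative only: shared backbone with three regression heads."""
        def __init__(self, feat_dim=256):
            super().__init__()
            # Placeholder shared feature extractor (the real backbone differs)
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, feat_dim), nn.ReLU(),
            )
            self.landmarks = nn.Linear(feat_dim, 63)   # 21 keypoints x (x, y, z)
            self.presence = nn.Linear(feat_dim, 1)     # hand presence score
            self.handedness = nn.Linear(feat_dim, 1)   # left/right score

        def forward(self, x):
            feats = self.backbone(x)
            return self.landmarks(feats), self.presence(feats), self.handedness(feats)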
    Throughput
    Model variant: mediapipe-hand-landmarker:224x224
    • Input shape: [1, 3, 224, 224]
    • Output shapes: [[1, 63], [1, 1], [1, 1], [1, 63]]
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         60.44                    N/A
    RVC4        INT8         720.88                   2.60
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Quantization
    The RVC4 version of the model was quantized using a custom dataset, created by taking a 50-image subset of the dataset.
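    As an illustrative sketch of assembling such a calibration subset (the folder path is hypothetical; the actual quantization tooling is not shown here):
    import random
    from pathlib import Path

    random.seed(0)  # reproducible subset
    images = sorted(Path("calibration_images").glob("*.jpg"))  # hypothetical image folder
    calibration_subset = random.sample(images, k=50)           # 50-image calibration subset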
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser(s) (as defined in the model head(s)). Below, we present the key utilization steps for this particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes.node import ParsingNeuralNetwork  # import path may vary across depthai-nodes versions

    pipeline = dai.Pipeline()

    model_description = dai.NNModelDescription(
        "luxonis/mediapipe-hand-landmarker:224x224"
    )

    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
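    As an illustrative sketch of what could replace the <CameraNode> placeholder (the exact camera API may differ across DepthAI v3 releases):
    # Hypothetical camera node; pass it in place of <CameraNode> above
    camera_node = pipeline.create(dai.node.Camera).build()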
    
    Inspect model head(s):
    • KeypointParser that outputs a Keypoints message (detected hand keypoints).
    • RegressionParser that outputs a Predictions message (confidence score).
    • RegressionParser that outputs a Predictions message (handedness).
    The model is multi-headed. You can set up the queues as follows:
    keypoints_parser_output_queue = nn.getOutput(0).createOutputQueue()   # hand keypoints head
    score_parser_output_queue = nn.getOutput(1).createOutputQueue()       # hand presence score head
    handedness_parser_output_queue = nn.getOutput(2).createOutputQueue()  # handedness head
    
    Get parsed output(s):
    from depthai_nodes.message import Keypoints, Predictions  # import path may vary across depthai-nodes versions

    pipeline.start()
    while pipeline.isRunning():
        keypoints_parser_output: Keypoints = keypoints_parser_output_queue.get()
        score_parser_output: Predictions = score_parser_output_queue.get()
        handedness_parser_output: Predictions = handedness_parser_output_queue.get()
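    To illustrate how the parsed messages might be consumed inside the loop above, here is a sketch; the attribute names are assumptions based on the depthai-nodes message definitions, so check the library documentation for your version:
    # Inside the while-loop body above (attribute names are assumptions)
    for kp in keypoints_parser_output.keypoints:
        print(kp.x, kp.y, kp.z)                                   # one 3D point per hand keypoint
    print("presence:", score_parser_output.predictions[0].prediction)
    print("handedness:", handedness_parser_output.predictions[0].prediction)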
    
    
    Example
    You can quickly run the model using our example.
    The example demonstrates how to build a 2-stage DepthAI pipeline consisting of a palm detection model and a hand landmark model. It automatically downloads the models, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
    python3 main.py
    
    MediaPipe Hand Landmarker
    Hand Landmark model.
    License: Apache 2.0 (commercial use)
    Downloads: 1299
    Tasks: Keypoint Detection
    Model Types: ONNX
    Model Variants: available for RVC2, RVC3, RVC4 (created 9 months ago)