SuperAnimal Landmarker
    Animal landmark model for quadrupeds.
    Model Details
    Model Description
    The DLC SuperAnimal keypoints model is intended for pose estimation of quadrupeds (four-legged animals) in images taken from a side view. It predicts 39 keypoints and can estimate the pose of more than 45 different species, from mice, rats, horses, dogs, and cats to elephants and gazelles. The input image should already be cropped so that it contains only the desired animal.
    • Developed by: Mathis Lab
    • Shared by:
    • Model type: Computer Vision
    • License: GNU General Public License v3.0
    • Resources for more information:
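    Since the model expects an input cropped to a single animal, frames are typically cropped around a detector's bounding box before inference. Below is a minimal sketch of such a preprocessing step; the crop_to_animal helper, the bbox format, and the padding value are illustrative assumptions, not part of the official pipeline:

    import cv2

    def crop_to_animal(image, bbox, pad=0.1):
        # bbox: (x1, y1, x2, y2) in pixel coordinates (assumed format)
        x1, y1, x2, y2 = bbox
        h, w = image.shape[:2]
        # expand the box by a small margin so no limbs are cut off
        dx, dy = int((x2 - x1) * pad), int((y2 - y1) * pad)
        crop = image[max(0, y1 - dy):min(h, y2 + dy),
                     max(0, x1 - dx):min(w, x2 + dx)]
        # resize to the model's 256x256 input resolution
        return cv2.resize(crop, (256, 256))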
    Training Details
    Training Data
    The model was trained on the Quadruped-80K dataset, which is built from over 85,000 images sourced from diverse laboratory settings and in-the-wild data. For more information, check the original publication.
    Testing Details
    Metrics
    Evaluation results are taken from the original publication.
    Dataset       mAP     RMSE
    AP-10K        80.11   11.30
    AnimalPose    87.03   4.64
    Horse-10      95.17   1.15
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: input
        • Info: NCHW BGR un-normalized image
    • Output:
      • Name: heatmaps
        • Info: Heatmaps for all 39 keypoints. Additional postprocessing is required (see the parser node).
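    For illustration, the sketch below shows one simple way to decode such heatmaps, assuming one 64x64 map per keypoint (matching the output shape listed under Throughput) and plain per-channel argmax decoding; the official SuperAnimalParser may apply additional refinement:

    import numpy as np

    def decode_heatmaps(heatmaps, input_size=256):
        # heatmaps: array of shape [64, 64, 39], one channel per keypoint
        h, w, num_keypoints = heatmaps.shape
        keypoints = []
        for k in range(num_keypoints):
            channel = heatmaps[..., k]
            # location of the strongest response for this keypoint
            y, x = np.unravel_index(np.argmax(channel), channel.shape)
            score = float(channel[y, x])
            # map grid coordinates back to the 256x256 model input
            keypoints.append((x * input_size / w, y * input_size / h, score))
        return keypoints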
    Model Architecture
    Please consult the original publication for more information on the model architecture.
    Throughput
    Model variant: superanimal-landmarker:256x256
    • Input shape: [1, 3, 256, 256]
    • Output shape: [1, 64, 64, 39]
    • Params (M): 28.510
    • GFLOPs: 10.268
    Platform   Precision   Throughput (infs/sec)   Power Consumption (W)
    RVC2       FP16        13.75                   N/A
    RVC4       INT8        509.90                  4.82
    * Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the model profiling package.
    Quantization
    The RVC4 version of the model was quantized using a custom dataset, created by taking a 70-image subset of the dataset.
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most crucial utilization steps for this particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes import Keypoints, ParsingNeuralNetwork

    pipeline = dai.Pipeline()

    model_description = dai.NNModelDescription(
        "luxonis/superanimal-landmarker:256x256"
    )

    # replace <CameraNode> with your camera node on the pipeline
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
    
    Inspect model head(s):
    • SuperAnimalParser that outputs a Keypoints message (at most 39 detected keypoints with scores).
    Get parsed output(s):
    parser_output_queue = nn.out.createOutputQueue()
    pipeline.start()
    while pipeline.isRunning():
        parser_output: Keypoints = parser_output_queue.get()
    
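    Putting the steps together, a minimal end-to-end sketch might look as follows. It assumes a connected OAK device; the Camera node usage and the Keypoints message attributes follow DepthAI v3 and depthai-nodes conventions and may differ slightly between versions:

    import depthai as dai
    from depthai_nodes import Keypoints, ParsingNeuralNetwork

    with dai.Pipeline() as pipeline:
        # camera source feeding the model's 256x256 input
        camera = pipeline.create(dai.node.Camera).build()
        nn = pipeline.create(ParsingNeuralNetwork).build(
            camera, "luxonis/superanimal-landmarker:256x256"
        )
        queue = nn.out.createOutputQueue()

        pipeline.start()
        while pipeline.isRunning():
            msg: Keypoints = queue.get()
            for kp in msg.keypoints:  # up to 39 keypoints with scores
                print(kp.x, kp.y, kp.confidence)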
    Example
    You can quickly run the model using our example.
    The example demonstrates how to build a two-stage DepthAI pipeline for detecting animals and estimating their poses. The pipeline consists of an object detector and a pose estimation model. The example works on both RVC2 and RVC4. For real-time applications, you will need to use OAK4 cameras.
    It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
    python3 main.py
    