    Model Details
    Model Description
    MiDaS v2.1 is a robust model for monocular depth estimation. It calculates the relative distance of objects from the camera and is reported to perform well across various scenarios. We implement here the Small version of the model, which is optimized for use on edge devices.
    • Developed by: René Ranftl et al.
    • Shared by:
    • Model type: Computer Vision
    • License: MIT
    • Resources for more information:
    Training Details
    Training Data
    The model was trained on a diverse set of data taken from the ReDWeb, DIML, Movies, MegaDepth, WSVD, TartanAir, ApolloScape, BlendedMVS, and IRS datasets. Training on depth data of different modalities was enabled by multi-objective optimization.
    Testing Details
    Metrics
    Test metrics were evaluated on various previously unseen datasets to demonstrate the model's generalizability:
    • DIW dataset test split,
    • ETH3D dataset where ground truth is available,
    • Sintel dataset where ground truth is available,
    • KITTI dataset validation split for depth estimation and the Eigen test split,
    • NYU dataset test split,
    • TUM dataset subset of humans in indoor environments.
    The authors used metrics aligned with each dataset's ground truth (a computation sketch follows the results below):
    • Weighted Human Disagreement Rate (WHDR) for DIW,
    • Mean Absolute Relative Error (AbsRel) for ETH3D and Sintel,
    • Percentage of pixels with δ>1.25 for KITTI, NYU, and TUM.
    Dataset | WHDR | AbsRel | δ>1.25
    DIW | 0.1344 | / | /
    ETH3D | / | 0.1344 | /
    Sintel | / | 0.3370 | /
    KITTI | / | / | 29.27
    NYU | / | / | 13.43
    TUM | / | / | 14.53
    The results are taken from the official MiDaS page on GitHub.
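    As an illustration, here is a minimal NumPy sketch of AbsRel and the δ>1.25 error; the function names are hypothetical, and the scale-and-shift alignment MiDaS applies to its relative predictions before evaluation is omitted:
    import numpy as np

    def abs_rel(pred, gt):
        # Mean absolute relative error over pixels with valid ground truth
        mask = gt > 0
        return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

    def delta_error(pred, gt, threshold=1.25):
        # Percentage of pixels whose depth ratio max(pred/gt, gt/pred) exceeds the threshold
        mask = gt > 0
        ratio = np.maximum(pred[mask] / gt[mask], gt[mask] / pred[mask])
        return 100.0 * np.mean(ratio > threshold)
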
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: image
        • Info: NCHW BGR image
    • Output:
      • Name: relative_depth
        • Info: 2D map of relative depth values (a pre/post-processing sketch follows below)
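    For illustration, a minimal pre- and post-processing sketch with OpenCV and NumPy for the small-384x256 variant; the file name and the placeholder output array are hypothetical, and preprocessing is handled for you when running the model through a DepthAI pipeline:
    import cv2
    import numpy as np

    # Prepare an NCHW BGR input of shape [1, 3, 256, 384]
    frame = cv2.imread("example.jpg")                 # OpenCV loads images as BGR; path is hypothetical
    frame = cv2.resize(frame, (384, 256))             # resize to (width, height) = (384, 256)
    nchw = frame.transpose(2, 0, 1)[np.newaxis, ...]  # HWC -> CHW, then add a batch dimension

    # Visualize a [1, 256, 384] relative_depth output (placeholder array used here)
    relative_depth = np.random.rand(1, 256, 384).astype(np.float32)
    depth_u8 = cv2.normalize(relative_depth[0], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    colored = cv2.applyColorMap(depth_u8, cv2.COLORMAP_INFERNO)  # relative values, not metric depth
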
    Model Architecture
    • Backbone: EfficientNet Lite 3
    For more information, see the source code.
    Throughput
    Model variant: midas-v2-1:small-384x256
    • Input shape: [1, 3, 256, 384]
    • Output shape: [1, 256, 384]
    • Params (M): 16.564
    • GFLOPs: 7.001
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 9.89 | N/A
    RVC4 | INT8 | 461.41 | 5.10
    Model variant: midas-v2-1:small-256x192
    • Input shape: [1, 3, 192, 256]
    • Output shape: [1, 192, 256]
    • Params (M): 16.564
    • GFLOPs: 3.501
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 20.24 | N/A
    RVC4 | INT8 | 616.54 | 4.43
    Model variant: midas-v2-1:small-512x288
    • Input shape: [1, 3, 288, 512]
    • Output shape: [1, 288, 512]
    • Params (M): 16.564
    • GFLOPs: 10.502
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 7.08 | N/A
    RVC4 | FP32 | 106.53 | 3.28
    Model variant: midas-v2-1:small-512x384
    • Input shape: [1, 3, 384, 512]
    • Output shape: [1, 384, 512]
    • Params (M): 16.564
    • GFLOPs: 14.002
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 4.80 | N/A
    RVC4 | INT8 | 295.30 | 4.70
    Model variant: midas-v2-1:small-1024x768
    • Input shape: [1, 3, 768, 1024]
    • Output shape: [1, 768, 1024]
    • Params (M): 16.564
    • GFLOPs: 56.009
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC4 | INT8 | 70.59 | 4.78
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Quantization
    RVC4 models were quantized to int8 using the HubAI General dataset.
    Utilization
    Models converted for RVC Platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most important utilization steps for this particular model; a complete end-to-end sketch follows the steps. Please consult the docs for more information.
    Install the DepthAI v3 (depthai) and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork

    pipeline = dai.Pipeline()

    model_description = dai.NNModelDescription(
        "luxonis/midas-v2-1:small-384x256"
    )

    # Build the parsing neural network on top of a camera node
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
    
    Inspect model head(s):
    • MapOutputParser that outputs a Map2D message (relative depth map).
    Get parsed output(s):
    while pipeline.isRunning():
        # Blocks until the next parsed Map2D message (relative depth map) arrives
        parser_output: Map2D = parser_output_queue.get()
    
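    Putting the steps together, here is a minimal end-to-end sketch; the camera-node construction, the Map2D import path, and the output-queue creation are assumptions that may differ slightly between DepthAI and depthai-nodes releases:
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork
    from depthai_nodes.message import Map2D  # import path is an assumption; may vary by release

    with dai.Pipeline() as pipeline:
        # Camera source feeding the neural network (default socket assumed)
        camera = pipeline.create(dai.node.Camera).build()

        # Model from the ZOO; the output parser is attached from the model head(s)
        model_description = dai.NNModelDescription("luxonis/midas-v2-1:small-384x256")
        nn = pipeline.create(ParsingNeuralNetwork).build(camera, model_description)

        # Queue delivering parsed Map2D messages from the MapOutputParser head
        parser_output_queue = nn.out.createOutputQueue()

        pipeline.start()
        while pipeline.isRunning():
            parser_output: Map2D = parser_output_queue.get()
            # parser_output carries the 2D map of relative depth values
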
    Example
    You can quickly run the model using our script. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/midas-v2-1:small-384x256 \
        --overlay
    
    MiDaS v2.1
    Monocular depth estimation model.
    • License: MIT (commercial use)
    • Downloads: 2088
    • Tasks: Depth Estimation
    • Model Types: ONNX
    Model Variants
    Five variants are available, created 9 months ago: one for RVC4 only and four for both RVC2 and RVC4.