Depth Anything V2
    Monocular depth estimation model, distributed as ONNX variants for the RVC4 platform under the Apache 2.0 license (commercial use permitted).
    Model Details
    Model Description
    Depth Anything V2 is a robust model for monocular depth estimation. V2 is an improved version of the model that generates much finer depth predictions. We implement the most lightweight version of the model here, using the DINOv2 ViT-S encoder. In addition, we implement two versions of the model fine-tuned for metric depth estimation (MDE) of indoor and outdoor scenes.
    • Developed by: Lihe Yang et al.
    • Shared by:
    • Model type: Computer Vision
    • License: Apache 2.0
    • Resources for more information:
    Photo by Sagar Sintan.
    Training Details
    Training Data
    Custom dataset of 595K synthetic (depth-)labeled images and 62M real pseudo-labeled images (see Table 1 in the paper for more details). Additionally, the MDE models were fine-tuned on either indoor or outdoor metric depth estimation datasets.
    Testing Details
    Metrics
    Zero-shot depth estimation was tested on various unseen datasets using the absolute relative error (AbsRel) and threshold accuracy (δ1) metrics (defined in the sketch below the table). The performance is comparable to V1.
    Dataset | AbsRel | δ1
    KITTI | 0.078 | 0.936
    NYU-D | 0.053 | 0.973
    DIODE | 0.073 | 0.942
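    For reference, the two metrics can be computed as in the following minimal numpy sketch (function names are ours; the exact evaluation protocol, e.g. the scale-and-shift alignment applied to relative depth predictions, follows the paper):
    import numpy as np

    def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
        # Mean absolute relative error over valid pixels: mean(|pred - gt| / gt)
        valid = gt > 0
        return float(np.mean(np.abs(pred[valid] - gt[valid]) / gt[valid]))

    def delta1(pred: np.ndarray, gt: np.ndarray) -> float:
        # Threshold accuracy: fraction of pixels with max(pred/gt, gt/pred) < 1.25
        valid = gt > 0
        ratio = np.maximum(pred[valid] / gt[valid], gt[valid] / pred[valid])
        return float(np.mean(ratio < 1.25))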
    Moreover, the authors introduced DA-2K, a new evaluation benchmark offering more extensive scene coverage and precision. The V2 small model achieved an accuracy of 95.3%, outperforming the V1 model. See the paper for more information.
    The authors provide no quantitative evaluation for the MDE models. Please refer to Figure 15 in the paper for qualitative evaluation results.
    Technical Specifications
    • Input:
      • Name: image
        • Info: NCHW BGR image (see the input-preparation sketch after this list)
    • Output:
      • Name: relative_depth (or metric_depth for MDE models)
        • Info: Relative depth of the input image (or metric depth in meters for MDE models)
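    When the input comes from a DepthAI camera node, resizing and layout conversion happen inside the pipeline. If you ever prepare tensors by hand (e.g. for offline ONNX inference), a minimal sketch of the expected layout, assuming the 336x252 variant (the file name is illustrative; any normalization defined in the model config is omitted here):
    import cv2
    import numpy as np

    frame = cv2.imread("input.jpg")           # OpenCV loads BGR, HWC, uint8
    frame = cv2.resize(frame, (336, 252))     # cv2.resize takes (width, height)
    tensor = frame.transpose(2, 0, 1)[None]   # HWC -> CHW, add batch dim: [1, 3, 252, 336]
    tensor = np.ascontiguousarray(tensor)
    print(tensor.shape)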
    Model Architecture
    DINOv2 (ViT-S) encoder for feature extraction, DPT decoder for depth regression. See the paper for more information.
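    As a rough, hypothetical illustration of the encoder-decoder data flow (not the actual implementation; module names and dimensions are invented): the encoder yields features at several depths, and a DPT-style decoder fuses them from coarse to fine before regressing a dense depth map. A toy PyTorch sketch:
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DPTStyleDepthSketch(nn.Module):
        # Toy stand-in: the real model taps DINOv2 ViT-S tokens at several
        # depths; plain convs fake the multi-scale features here so the
        # DPT-style fuse-and-upsample flow is visible.
        def __init__(self, dim: int = 64):
            super().__init__()
            self.enc = nn.ModuleList(
                nn.Conv2d(3 if i == 0 else dim, dim, 3,
                          stride=4 if i == 0 else 2, padding=1)
                for i in range(4)
            )  # feature maps at strides 4, 8, 16, 32
            self.head = nn.Conv2d(dim, 1, 3, padding=1)  # dense depth regression

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            feats, h = [], x
            for conv in self.enc:
                h = conv(h)
                feats.append(h)
            h = feats[-1]
            for skip in reversed(feats[:-1]):  # fuse deep -> shallow
                h = F.interpolate(h, size=skip.shape[-2:], mode="bilinear",
                                  align_corners=False) + skip
            depth = self.head(h)
            return F.interpolate(depth, size=x.shape[-2:], mode="bilinear",
                                 align_corners=False)

    print(DPTStyleDepthSketch()(torch.rand(1, 3, 252, 336)).shape)  # [1, 1, 252, 336]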
    Throughput
    Model variant: depth-anything-v2:vit-s-mde-outdoors-336x252
    • Input shape: [1, 3, 252, 336]
    • Output shape: [1, 252, 336]
    • Params (M): 24.710
    • GFLOPs: 15.391

    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC4 | FP16 | 10.55 | 3.72
    Model variant: depth-anything-v2:vit-s-560x420
    • Input shape: [1, 3, 420, 560]
    • Output shape: [1, 420, 560]
    • Params (M): 24.710
    • GFLOPs: 53.596

    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC4 | FP16 | 1.41 | 3.44
    Model variant: depth-anything-v2:vit-s-mde-indoors-336x252
    • Input shape: [1, 3, 252, 336]
    • Output shape: [1, 252, 336]
    • Params (M): 24.710
    • GFLOPs: 15.391

    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC4 | FP16 | 10.54 | 3.27
    Model variant: depth-anything-v2:vit-s-336x252
    • Input shape: [1, 3, 252, 336]
    • Output shape: [1, 252, 336]
    • Params (M): 24.710
    • GFLOPs: 15.391

    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC4 | FP16 | 10.58 | 3.96
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most important utilization steps for this model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define the model (imports and pipeline setup shown for completeness; replace <CameraNode> with your camera node, e.g. a dai.node.Camera instance):
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork

    # Create the pipeline and describe the model to fetch from the Luxonis Hub
    pipeline = dai.Pipeline()
    model_description = dai.NNModelDescription(
        "luxonis/depth-anything-v2:vit-s-336x252"
    )

    # Build the neural network node (with output parsing) on top of a camera node
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
    
    Inspect model head(s):
    • MapOutputParser that outputs a Map2D message (depth map).
    Get parsed output(s):
    parser_output_queue = nn.out.createOutputQueue()  # host-side queue for parsed results
    pipeline.start()
    while pipeline.isRunning():
        parser_output: Map2D = parser_output_queue.get()

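    To do something with the parsed output, e.g. a quick host-side visualization, a sketch along these lines can work (it extends the loop above and assumes the Map2D message exposes the depth values as a 2D numpy array via its map attribute, which may differ across depthai-nodes versions, and that OpenCV is installed):
    import cv2
    import numpy as np

    while pipeline.isRunning():
        parser_output: Map2D = parser_output_queue.get()
        depth = parser_output.map  # assumed attribute: 2D array (relative depth, or meters for MDE)
        # Normalize to 0-255 for display and apply a color map
        norm = cv2.normalize(depth, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imshow("Depth Anything V2", cv2.applyColorMap(norm, cv2.COLORMAP_INFERNO))
        if cv2.waitKey(1) == ord("q"):
            break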
    Example
    You can quickly run the model using our example script. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/depth-anything-v2:vit-s-336x252 \
        --overlay
    