Model Details
Model Description
Depth Anything V2 is a robust model for monocular depth estimation.
V2 is an improved version of the model that produces much finer depth predictions.
Here we implement the most lightweight version of the model, which uses the DINOv2 ViT-S encoder.
In addition, we provide two versions of the model fine-tuned for metric depth estimation (MDE) of indoor and outdoor scenes.
Developed by: Lihe Yang et al.
Shared by:
Model type: Computer Vision
License:
Resources for more information:
Training Details
Training Data
Custom dataset of 595K synthetic (depth-)labeled images and 62M real pseudo-labeled images (see Table 1 in the paper for more details).
Additionally, the MDE models were fine-tuned either on indoor or outdoor metric depth estimation datasets.
Testing Details
Metrics
Zero-shot depth estimation was tested on various unseen datasets with absolute relative error (AbsRel) and threshold accuracy (δ1) metrics. The performance is comparable to V1.
Dataset   AbsRel   δ1
KITTI     0.078    0.936
NYU-D     0.053    0.973
DIODE     0.073    0.942
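For reference, both metrics have simple closed forms. Below is a minimal sketch (our own helper names; it assumes pred and gt are aligned, strictly positive NumPy depth arrays, and uses the standard δ1 threshold of 1.25):

import numpy as np

def abs_rel(pred: np.ndarray, gt: np.ndarray) -> float:
    # Mean absolute relative error: mean(|pred - gt| / gt)
    return float(np.mean(np.abs(pred - gt) / gt))

def delta1(pred: np.ndarray, gt: np.ndarray) -> float:
    # Fraction of pixels where max(pred/gt, gt/pred) < 1.25
    ratio = np.maximum(pred / gt, gt / pred)
    return float(np.mean(ratio < 1.25))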
Moreover, the authors introduced a new evaluation benchmark, DA-2K, offering more extensive scene coverage and precision. The V2 small model achieved an accuracy of 95.3%, outperforming the V1 model. See the paper for more information.
The authors provide no quantitative evaluation for the MDE models. Please refer to Figure 15 in the paper for qualitative evaluation results.
Technical Specifications
Input:
Name: image
Info: NCHW BGR image
Output:
Name: relative_depth (or metric_depth for MDE models)
Info: Relative depth of the input image (or metric depth in meters for MDE models)
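The relative-depth output is unitless and scaled per frame, so it is typically normalized before display; the MDE variants return meters directly. A minimal visualization sketch (the helper name and color map are our own choices, not part of the model):

import cv2
import numpy as np

def colorize_depth(depth: np.ndarray) -> np.ndarray:
    # Normalize to [0, 1] per frame (relative depth has no fixed scale),
    # then map to an 8-bit color image for display.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    return cv2.applyColorMap((d * 255).astype(np.uint8), cv2.COLORMAP_INFERNO)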
Model Architecture
DINOv2 (ViT-S) encoder for feature extraction, DPT decoder for depth regression. See the paper for more information.
Throughput
Model variant: depth-anything-v2:vit-s-mde-outdoors-336x252
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most crucial utilization steps for the particular model.
Please consult the docs for more information.
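A minimal setup sketch is given below. It assumes DepthAI v3 and the depthai_nodes package; the import paths and the exact model slug (here inferred from the variant listed above) may differ depending on your versions:

import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork
from depthai_nodes.message import Map2D  # message type emitted by the parser

pipeline = dai.Pipeline()
camera = pipeline.create(dai.node.Camera).build()
# ParsingNeuralNetwork downloads the model from the ZOO and attaches the
# output parser defined in the model head.
nn = pipeline.create(ParsingNeuralNetwork).build(
    camera, "luxonis/depth-anything-v2:vit-s-mde-outdoors-336x252"
)
parser_output_queue = nn.out.createOutputQueue()
pipeline.start()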
while pipeline.isRunning():
    parser_output: Map2D = parser_output_queue.get()
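From the Map2D message you can then read the raw 2D array (attribute name taken from the depthai_nodes message definition; verify against your version) and pass it, for example, to a visualization helper like the one sketched under Technical Specifications:

depth_map = parser_output.map  # 2D numpy array: relative depth, or meters for MDE models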
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: