Model Details
Model Description
YOLOv6l is a convolutional neural network designed for object detection. It is highly effective and accurate, making it well suited for real-time applications on edge devices. It was trained on the COCO dataset and can detect objects of 80 classes.
Developed by: Meituan Vision AI Department
Shared by:
Model type: Object detection model
License: GPL-3.0
Resources for more information: YOLOv6 GitHub repository (https://github.com/meituan/YOLOv6) and the YOLOv6 paper (arXiv:2209.02976)
Training Details
Training Data
COCO is a large-scale object detection, segmentation, and captioning dataset.
Testing Details
Metrics
mAP and speed results are evaluated on the COCO val2017 dataset with an input resolution of 640×640. Results are taken from the official YOLOv6 repository.
Model     mAP    Params (M)
YOLOv6l   52.5   58.5
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW BGR un-normalized image (see the layout sketch after this list)
Outputs:
Name: output1_yolov6r2
Info: Unprocessed output of the first channel
Name: output2_yolov6r2
Info: Unprocessed output of the second channel
Name: output3_yolov6r2
Info: Unprocessed output of the third channel
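To make the expected input layout concrete, here is a minimal sketch of building such a tensor by hand with NumPy and OpenCV. DepthAI normally prepares the input on-device; the frame source and resolution below are illustrative only.

import cv2
import numpy as np

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # stand-in for a BGR camera frame (HWC)
frame = cv2.resize(frame, (640, 640))                # match the model input resolution
tensor = frame.transpose(2, 0, 1)[np.newaxis, ...]   # HWC -> CHW -> NCHW
# Un-normalized: pixel values stay in [0, 255]; no scaling or mean subtraction.
print(tensor.shape)  # (1, 3, 640, 640)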
Model Architecture
Backbone: EfficientRep
Neck: Rep-PAN
Head: Efficient decoupled head (anchor-free)
Please consult the YOLOv6 paper for more information on the model architecture.
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from an external profiling package.
Quantization
The RVC4 version of the model was quantized using a custom calibration dataset of 128 images.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)).
Below, we present the most important utilization steps for this particular model.
Please consult the docs for more information.
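As a rough sketch of what such a pipeline can look like (assuming the DepthAI v3 API and the ParsingNeuralNetwork helper from the depthai-nodes package; the exact import paths and the model slug placeholder are illustrative, not taken from this page):

import depthai as dai
from depthai_nodes import ParsingNeuralNetwork  # assumed import path

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    # The model slug below is a placeholder; use the slug from the model page.
    nn = pipeline.create(ParsingNeuralNetwork).build(camera, "<model-slug>")
    parser_output_queue = nn.out.createOutputQueue()
    pipeline.start()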
The model output is automatically parsed by DAI into a dai.ImgDetections message (bounding boxes, labels, and scores of the detected objects).
Get model output(s):
while pipeline.isRunning():
    # Blocking get of the parsed detections for the next frame
    nn_output: dai.ImgDetections = parser_output_queue.get()
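Each message holds a list of detections whose fields follow the DepthAI ImgDetection API; a minimal sketch of reading them:

for detection in nn_output.detections:
    # Normalized corner coordinates plus class index and confidence score
    print(
        detection.label,
        f"{detection.confidence:.2f}",
        detection.xmin, detection.ymin, detection.xmax, detection.ymax,
    )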
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run:
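A hypothetical invocation (the actual script name, flag, and model slug are listed on the model page):

python3 main.py --model <model-slug>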