Model Details
Model Description
YOLOv8 Nano is a state-of-the-art (SOTA) model that builds on the success of previous YOLO versions and introduces new features and improvements to further boost performance and flexibility. This variant is trained on PPE (Personal Protective Equipment) datasets to detect people and their safety equipment, as well as machinery, vehicles, and safety cones. In particular, the model detects whether a person is wearing a hard hat, mask, or safety vest.
Developed by:
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The model was trained on . The dataset consists of 2801 image samples with labels in YOLOv8 format. It covers 10 classes: 'Hardhat', 'Mask', 'NO-Hardhat', 'NO-Mask', 'NO-Safety Vest', 'Person', 'Safety Cone', 'Safety Vest', 'machinery', and 'vehicle'.
Testing Details
Metrics
Unfortunately, no evaluation results are available for this specific model, so we list the evaluation results of the COCO-pretrained model instead.
Metric        Value
mAP@50-95     37.3
params        3.2M
Results are taken from .
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW BGR un-normalized image (see the input-layout sketch after this list)
Output:
Name: multiple (see NN archive)
Info: intermediate yolo results
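
The expected input is therefore an un-normalized BGR image in NCHW layout. Below is a minimal sketch of preparing such a tensor with OpenCV and NumPy; the 640x640 input resolution is an assumption, so take the actual input shape from the NN archive:

import cv2
import numpy as np

INPUT_W, INPUT_H = 640, 640  # assumption -- read the actual input shape from the NN archive

frame = cv2.imread("example.jpg")             # OpenCV loads images as BGR, HWC, uint8 (un-normalized)
frame = cv2.resize(frame, (INPUT_W, INPUT_H))
tensor = np.expand_dims(frame.transpose(2, 0, 1), axis=0)  # HWC -> CHW -> NCHW
print(tensor.shape)  # (1, 3, INPUT_H, INPUT_W)

In a typical DepthAI pipeline the camera frames are fed to the model directly, so this snippet only illustrates the expected tensor layout.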
Model Architecture
Backbone: Lightweight with CSPNet for efficient feature extraction.
Neck: PANet and FPN for multi-scale feature fusion.
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized on a 70-image subset of .
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most important utilization steps for this particular model.
Please consult the docs for more information.
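
For orientation, here is a minimal sketch of how such a pipeline is typically assembled with the DepthAI v3 API. The model slug, the ParsingNeuralNetwork node (from the depthai-nodes package), its import path, and the build arguments are assumptions in this sketch, so please check the docs and the model page for the exact values:

import depthai as dai
from depthai_nodes import ParsingNeuralNetwork  # assumed import path; provided by the depthai-nodes package

MODEL_SLUG = "luxonis/yolov8-nano-ppe:placeholder"  # hypothetical slug -- use the one listed for this model

with dai.Pipeline() as pipeline:
    camera = pipeline.create(dai.node.Camera).build()
    # The parsing NN node fetches the model from the ZOO, runs inference, and parses the raw outputs.
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(camera, MODEL_SLUG)
    parser_output_queue = nn_with_parser.out.createOutputQueue()
    pipeline.start()
    # The output loop shown below runs here, inside the `with` block.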
The model output is automatically parsed by DAI into a dai.ImgDetections message (bounding boxes, labels, and scores of the detected objects).
Get model output(s):
while pipeline.isRunning():
    nn_output: dai.ImgDetections = parser_output_queue.get()
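
Each dai.ImgDetections message carries a list of detections with the standard DepthAI detection fields; a short sketch of reading them (printing is only for illustration):

for detection in nn_output.detections:
    # class index (into the 10 classes listed above), confidence score, and bounding-box corners
    print(detection.label, detection.confidence,
          detection.xmin, detection.ymin, detection.xmax, detection.ymax)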
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: