Model Details
Model Description
MobileNet-SSD is an efficient, lightweight single-shot detection (SSD) model that detects 20 common object classes (plus a background class): aeroplane, bicycle, bird, boat, bottle, bus, car, cat, chair, cow, dining table, dog, horse, motorbike, person, potted plant, sheep, sofa, train, and tv monitor.
Developed by:
Shared by:
Model type: Computer Vision
License:
Resources for more information:
Training Details
Training Data
Model was trained on and fine-tuned on .
Testing Details
Metrics
Accuracy was tested on . See the for more information.
| Dataset | mAP [%] |
| ------- | ------- |
| VOC2007 | 67.00 |
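For context on the metric: mAP is computed from per-class precision/recall curves, where a detection counts as a true positive when its intersection-over-union (IoU) with a ground-truth box passes a threshold (0.5 for VOC). A minimal IoU helper, illustrative only, using the same [xmin, ymin, xmax, ymax] box convention as the model output:

```python
def iou(a, b):
    """Intersection-over-union of two [xmin, ymin, xmax, ymax] boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou([0, 0, 2, 2], [1, 1, 3, 3]))  # 1/7 ≈ 0.1429
```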
Technical Specifications
Input:
Name: data
Info: NCHW BGR image
Output:
Name: detection_out
Info: Array of detections (bounding boxes, scores, labels)
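As a sketch of what the NCHW BGR input layout means in practice, here is how one might build such a tensor from an HWC RGB frame with NumPy. The 300x300 resolution is taken from the mobilenet-ssd:300x300 variant listed below; the helper name is our own:

```python
import numpy as np

def to_nchw_bgr(frame_rgb: np.ndarray) -> np.ndarray:
    """Convert an HWC RGB uint8 frame to an NCHW BGR tensor.

    Assumes a 300x300 input, matching the mobilenet-ssd:300x300 variant.
    """
    assert frame_rgb.shape == (300, 300, 3), "resize the frame first"
    bgr = frame_rgb[:, :, ::-1]         # flip channel order: RGB -> BGR
    chw = np.transpose(bgr, (2, 0, 1))  # reorder axes: HWC -> CHW
    return chw[np.newaxis, ...]         # add batch dimension -> NCHW

frame = np.zeros((300, 300, 3), dtype=np.uint8)
print(to_nchw_bgr(frame).shape)  # (1, 3, 300, 300)
```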
Model Architecture
MobileNet backbone with the Single Shot MultiBox Detector (SSD) framework.
Throughput
Model variant: mobilenet-ssd:300x300
| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
| -------- | --------- | --------------------- | --------------------- |
| RVC2 | FP16 | 48.76 | N/A |
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most crucial utilization steps for this particular model.
Please consult the docs for more information.
We also have a full . So check it out!
The model output is automatically parsed by DAI into an ImgDetections message (bounding boxes, labels, and scores of the detected objects).
Get model output(s):
while pipeline.isRunning():
nn_output: dai.ImgDetections = parser_output_queue.get()
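The parser does this decoding for you, but for reference, the raw detection_out tensor of OpenVINO-style SSD models is typically a [1, 1, N, 7] array whose rows are [image_id, label, confidence, xmin, ymin, xmax, ymax], with coordinates normalized to [0, 1] and negative image_id marking padding rows; verify the exact layout against your exported model. A hand-rolled decoding sketch:

```python
import numpy as np

def decode_detections(raw: np.ndarray, conf_threshold: float = 0.5):
    """Decode a [1, 1, N, 7] SSD output into (label, score, box) tuples."""
    results = []
    for image_id, label, score, xmin, ymin, xmax, ymax in raw.reshape(-1, 7):
        if image_id < 0:  # negative image_id marks padding rows
            break
        if score >= conf_threshold:
            results.append((int(label), float(score),
                            (xmin, ymin, xmax, ymax)))
    return results

raw = np.array([[[[0, 15, 0.92, 0.1, 0.2, 0.4, 0.8],    # person (class 15)
                  [0,  7, 0.30, 0.5, 0.5, 0.6, 0.6],    # below threshold
                  [-1, 0, 0.0, 0, 0, 0, 0]]]],          # padding row
               dtype=np.float32)
print(decode_detections(raw))  # one detection: label 15, score ~0.92
```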
NOTE: During export, normalization is added to model input (scale_values: [127.5, 127.5, 127.5], mean_values: [127.5, 127.5, 127.5]).
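With those values, the on-device normalization is (x - 127.5) / 127.5, which maps uint8 pixels from [0, 255] into [-1, 1]:

```python
import numpy as np

mean = np.float32(127.5)
scale = np.float32(127.5)

pixels = np.array([0.0, 127.5, 255.0], dtype=np.float32)
normalized = (pixels - mean) / scale  # subtract mean, divide by scale
print(normalized)  # [-1.  0.  1.]
```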
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: