Model Details
Model Description
PP-LiteSeg: a superior real-time semantic segmentation model for ADAS (advanced driver-assistance systems). The model can segment 19 different classes, including cars, pedestrians, traffic signs, bicycles, roads, trees, and buildings.
Developed by: Baidu Inc.
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The model was trained on Cityscapes, a large-scale dataset for urban scene segmentation. It contains 5,000 finely annotated images, split into 2,975, 500, and 1,525 images for training, validation, and testing, respectively.
For more information about the training data, check the Cityscapes dataset documentation.
Testing Details
Metrics
The model was evaluated on the Cityscapes dataset.
Metric | Value
mIoU (%) | 72.00
Results are taken from the PP-LiteSeg paper.
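For reference, mIoU averages the per-class intersection-over-union between predicted and ground-truth masks. A minimal NumPy sketch of the metric (the toy arrays below are illustrative, not evaluation data):

```python
import numpy as np

def mean_iou(pred: np.ndarray, target: np.ndarray, num_classes: int) -> float:
    """Mean Intersection-over-Union over classes present in pred or target."""
    ious = []
    for c in range(num_classes):
        pred_c = pred == c
        target_c = target == c
        union = np.logical_or(pred_c, target_c).sum()
        if union == 0:
            continue  # class absent from both masks; skip so it does not skew the mean
        intersection = np.logical_and(pred_c, target_c).sum()
        ious.append(intersection / union)
    return float(np.mean(ious))

# Toy 2x2 masks with 2 classes: IoU is 1/2 for class 0 and 2/3 for class 1.
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 0], [1, 1]])
print(mean_iou(pred, target, num_classes=2))  # ≈ 0.583
```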
Technical Specifications
Input/Output Details
Input:
Name: x
Info: NCHW BGR 0-255 image.
Output:
Name: bilinear_interp_v2_13.tmp_0
Info: Segmentation masks for every class.
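A minimal NumPy sketch of shaping data for an NCHW BGR 0–255 input and reducing per-class output scores to a class-index map. The 512×1024 resolution and the random stand-in for the model output are illustrative assumptions, not the model's declared shapes:

```python
import numpy as np

NUM_CLASSES = 19  # class count from this model card

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """HxWxC BGR uint8 frame -> 1xCxHxW float batch, values kept in 0-255."""
    chw = frame_bgr.transpose(2, 0, 1).astype(np.float32)  # HWC -> CHW
    return chw[np.newaxis, ...]                            # add batch dim -> NCHW

def postprocess(scores: np.ndarray) -> np.ndarray:
    """1xCxHxW per-class scores -> HxW map of winning class indices."""
    return scores[0].argmax(axis=0)

frame = np.zeros((512, 1024, 3), dtype=np.uint8)        # dummy BGR frame
x = preprocess(frame)
print(x.shape)                                          # (1, 3, 512, 1024)
scores = np.random.rand(1, NUM_CLASSES, 512, 1024)      # stand-in for model output
mask = postprocess(scores)
print(mask.shape)                                       # (512, 1024)
```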
Model Architecture
PP-LiteSeg consists of three modules:
Encoder: Lightweight network
Aggregation: Simple Pyramid Pooling Module (SPPM)
Decoder: Flexible and Lightweight Decoder (FLD) and Unified Attention Fusion Module (UAFM)
Please consult the PP-LiteSeg paper for more information on the model architecture.
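As a rough illustration of the SPPM idea, features can be pooled to several bin sizes, upsampled back, and fused. The NumPy sketch below is a heavily simplified stand-in (nearest-neighbor upsampling and summation instead of convolutions), not the actual implementation:

```python
import numpy as np

def avg_pool_to(x: np.ndarray, bins: int) -> np.ndarray:
    """Adaptive average pooling of a CxHxW map down to C x bins x bins."""
    c, h, w = x.shape
    return x.reshape(c, bins, h // bins, bins, w // bins).mean(axis=(2, 4))

def upsample_nearest(x: np.ndarray, h: int, w: int) -> np.ndarray:
    """Nearest-neighbor upsampling of a CxBHxBW map back to CxHxW."""
    c, bh, bw = x.shape
    return np.repeat(np.repeat(x, h // bh, axis=1), w // bw, axis=2)

def sppm_sketch(x: np.ndarray) -> np.ndarray:
    """Pool at bin sizes 1/2/4, upsample back, and fuse by summation."""
    c, h, w = x.shape
    fused = sum(upsample_nearest(avg_pool_to(x, b), h, w) for b in (1, 2, 4))
    return fused + x  # residual-style fusion stand-in

feat = np.random.rand(32, 16, 16)   # dummy encoder feature map
out = sppm_sketch(feat)
print(out.shape)                    # (32, 16, 16)
```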
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized using the HubAI Driving dataset.
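To illustrate what calibration-based quantization does, here is a minimal NumPy sketch of 8-bit affine quantization, where the scale and zero point are derived from the range of calibration samples. This is a generic illustration, not the RVC4 toolchain's actual procedure:

```python
import numpy as np

def quantize_params(calib: np.ndarray) -> tuple[float, int]:
    """Derive scale/zero-point for 8-bit affine quantization from calibration data."""
    lo, hi = float(calib.min()), float(calib.max())
    lo, hi = min(lo, 0.0), max(hi, 0.0)     # the range must contain zero
    scale = (hi - lo) / 255.0 or 1.0        # avoid a zero scale for constant data
    zero_point = int(round(-lo / scale))
    return scale, zero_point

def quantize(x: np.ndarray, scale: float, zp: int) -> np.ndarray:
    return np.clip(np.round(x / scale) + zp, 0, 255).astype(np.uint8)

def dequantize(q: np.ndarray, scale: float, zp: int) -> np.ndarray:
    return (q.astype(np.float32) - zp) * scale

calib = np.array([-1.0, 0.0, 2.0, 4.1])     # stand-in calibration samples
s, zp = quantize_params(calib)
x = np.array([0.0, 1.0, 3.9])
err = np.abs(dequantize(quantize(x, s, zp), s, zp) - x)
print(err.max())  # reconstruction error bounded by ~scale/2
```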
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most important steps for using this particular model.
Please consult the docs for more information.
The model head uses a SegmentationParser that outputs a message containing the segmentation mask for the 19 classes.
Get parsed output(s):
while pipeline.isRunning():
    parser_output: dai.ImgFrame = parser_output_queue.get()
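Once a parsed frame is retrieved, the per-pixel class indices can be mapped to colors for display. A minimal NumPy sketch with a hypothetical fixed palette (the dummy mask stands in for the parser output):

```python
import numpy as np

NUM_CLASSES = 19  # class count from this model card

# Hypothetical palette: one fixed pseudo-random color per class.
rng = np.random.default_rng(0)
PALETTE = rng.integers(0, 256, size=(NUM_CLASSES, 3), dtype=np.uint8)

def colorize(mask: np.ndarray) -> np.ndarray:
    """Map an HxW class-index mask to an HxWx3 color image via palette lookup."""
    return PALETTE[mask]

mask = np.random.randint(0, NUM_CLASSES, size=(256, 512))  # dummy parser output
overlay = colorize(mask)
print(overlay.shape, overlay.dtype)  # (256, 512, 3) uint8
```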
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: