    Model Details
    Model Description
    PP-LiteSeg is a superior real-time semantic segmentation model, applied here to ADAS (advanced driver-assistance systems). It segments 19 classes, including cars, pedestrians, traffic signs, bicycles, roads, trees, and buildings.
    • Developed by: Baidu Inc.
    • Shared by: Luxonis
    • Model type: Computer vision
    • License: Apache 2.0
    • Resources for more information: the PP-LiteSeg paper (arXiv:2204.02681)
    Training Details
    Training Data
    The model was trained on Cityscapes, a large-scale dataset for urban scene segmentation. It contains 5,000 finely annotated images, split into 2,975 training, 500 validation, and 1,525 test images. For more information about the training data, see the Cityscapes dataset website.
    Testing Details
    Metrics
    The model was evaluated on Cityscapes.
    Metric | Value
    mIoU | 72.00 %
    Results are taken from the original PP-LiteSeg paper.
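    For reference, mIoU averages the per-class intersection-over-union between predicted and ground-truth label maps. Below is a minimal NumPy implementation of the standard definition (illustrative only, not Luxonis's evaluation code):
    import numpy as np

    def mean_iou(pred, target, num_classes=19):
        """Mean IoU over classes present in prediction or ground truth."""
        ious = []
        for c in range(num_classes):
            pred_c, target_c = pred == c, target == c
            union = np.logical_or(pred_c, target_c).sum()
            if union == 0:
                continue  # class absent everywhere; skip it
            ious.append(np.logical_and(pred_c, target_c).sum() / union)
        return float(np.mean(ious))
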
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: x
        • Info: NCHW BGR 0-255 image.
    • Output:
      • Name: bilinear_interp_v2_13.tmp_0
        • Info: Segmentation masks for every class.
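    Given this specification, the model expects a [1, 3, 512, 1024] BGR tensor with raw 0-255 values, and each output channel holds the score map of one class. A minimal NumPy/OpenCV pre-/post-processing sketch (the file name and the zero tensor standing in for real model output are placeholders):
    import cv2
    import numpy as np

    # OpenCV loads images in BGR order, matching the input spec.
    frame = cv2.imread("road_scene.jpg")                # placeholder image
    frame = cv2.resize(frame, (1024, 512))              # width 1024, height 512

    # HWC -> NCHW; keep the raw 0-255 range (no normalization, per the spec).
    input_tensor = frame.transpose(2, 0, 1)[np.newaxis].astype(np.float32)
    assert input_tensor.shape == (1, 3, 512, 1024)

    # Output shape is [1, 19, 512, 1024]: one score map per class.
    logits = np.zeros((1, 19, 512, 1024), dtype=np.float32)  # stand-in output
    label_map = logits[0].argmax(axis=0).astype(np.uint8)    # per-pixel class id
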
    Model Architecture
    PP-LiteSeg consists of three modules:
    1. Encoder: Lightweight network
    2. Aggregation: Simple Pyramid Pooling Module (SPPM)
    3. Decoder: Flexible and Lightweight Decoder (FLD) and Unified Attention Fusion Module (UAFM)
    Please consult the PP-LiteSeg paper for more information on the model architecture.
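    As an illustration of the fusion step, the UAFM upsamples the deep feature, predicts a spatial attention map alpha from per-pixel mean and max statistics of both features, and blends them as out = alpha * F_up + (1 - alpha) * F_low. A minimal PyTorch-style sketch of the spatial variant (normalization and projection layers from the real module are omitted):
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class UAFMSketch(nn.Module):
        """Illustrative spatial Unified Attention Fusion Module."""
        def __init__(self):
            super().__init__()
            # 4 pooled statistics -> 1 spatial attention map.
            self.attn = nn.Conv2d(4, 1, kernel_size=3, padding=1)

        def forward(self, f_high, f_low):
            # Upsample the deep (high-level) feature to the low-level resolution.
            f_up = F.interpolate(f_high, size=f_low.shape[2:], mode="bilinear")
            # Per-pixel mean and max of both features along the channel axis.
            stats = torch.cat([
                f_up.mean(1, keepdim=True), f_up.amax(1, keepdim=True),
                f_low.mean(1, keepdim=True), f_low.amax(1, keepdim=True),
            ], dim=1)
            alpha = torch.sigmoid(self.attn(stats))
            # Weighted fusion: out = alpha * f_up + (1 - alpha) * f_low.
            return alpha * f_up + (1 - alpha) * f_low
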
    Throughput
    Model variant: pp-liteseg:1024x512
    • Input shape: [1, 3, 512, 1024]
    • Output shape: [1, 19, 512, 1024]
    • GFLOPs: 12.213
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 5.76 | N/A
    RVC4 | INT8 | 164.08 | 5.03
    * Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained with an automated model-profiling package.
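    As a rough sanity check, the effective compute rate these numbers imply can be derived from the per-inference FLOPs (values copied from the tables above):
    # 12.213 GFLOPs per inference at 1024x512 input.
    gflops = 12.213

    # Effective compute rate = GFLOPs per inference x inferences per second.
    print(f"RVC2 FP16: {gflops * 5.76:.1f} GFLOP/s")    # ~70.3 GFLOP/s
    print(f"RVC4 INT8: {gflops * 164.08:.1f} GFLOP/s")  # ~2003.9 GFLOP/s
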
    Quantization
    The RVC4 version of the model was quantized using the HubAI Driving dataset.
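    The exact quantization toolchain is not documented here; as a sketch of the general post-training static-quantization workflow (calibrating activation ranges on representative driving images), here is an illustrative example using ONNX Runtime rather than the tool actually used, with hypothetical file names:
    import glob
    import numpy as np
    from onnxruntime.quantization import (
        CalibrationDataReader, QuantType, quantize_static,
    )

    class DrivingCalibrationReader(CalibrationDataReader):
        """Feeds preprocessed NCHW BGR 0-255 arrays (hypothetical .npy files)."""
        def __init__(self, pattern="calibration/*.npy"):
            self.files = iter(glob.glob(pattern))

        def get_next(self):
            path = next(self.files, None)
            if path is None:
                return None  # signals end of calibration data
            return {"x": np.load(path).astype(np.float32)}  # "x" = model input

    quantize_static(
        "pp_liteseg.onnx",            # hypothetical FP32 model file
        "pp_liteseg_int8.onnx",       # quantized output file
        DrivingCalibrationReader(),
        weight_type=QuantType.QInt8,
    )
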
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the essential steps for running this particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes.node import ParsingNeuralNetwork

    model_description = dai.NNModelDescription(
        "luxonis/pp-liteseg:1024x512"
    )

    pipeline = dai.Pipeline()
    # Replace <CameraNode> with your camera node, e.g. a dai.node.Camera instance.
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
    
    Inspect model head(s):
    • SegmentationParser that outputs a dai.ImgFrame message (the segmentation mask over the 19 classes).
    Get parsed output(s):
    parser_output_queue = nn.out.createOutputQueue()  # queue of parsed results
    pipeline.start()
    while pipeline.isRunning():
        parser_output: dai.ImgFrame = parser_output_queue.get()
    
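    To visualize the parsed output inside the loop above, the per-pixel class indices can be mapped to colors with OpenCV. A minimal sketch, assuming the parser emits a single-channel class-index frame (the color-map choice is arbitrary):
    import cv2
    import numpy as np

    # Inside the while loop: convert the ImgFrame to a (H, W) label map.
    mask = parser_output.getCvFrame().astype(np.uint8)   # values in 0-18

    # Spread the 19 class ids across 0-255 and apply a color map.
    colored = cv2.applyColorMap(mask * (255 // 18), cv2.COLORMAP_JET)
    cv2.imshow("PP-LiteSeg segmentation", colored)
    cv2.waitKey(1)
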
    Example
    You can quickly run the model using our example script. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results with the DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/pp-liteseg:1024x512 \
        --overlay
    
    PP-LiteSeg
    Semantic segmentation model for advanced driver-assistance systems (ADAS).
    License
    Apache 2.0 (commercial use)
    Downloads
    173
    Tasks
    Semantic Segmentation
    Model Types
    ONNX
    Model Variants
    Name | Available For | Created At
    pp-liteseg:1024x512 | RVC2, RVC4 | 10 months ago