    Model Details
    Model Description
    YOLO-P is a panoptic driving perception network that performs traffic object detection, drivable area segmentation, and lane detection simultaneously. It is composed of one encoder for feature extraction and three decoders to handle the specific tasks.
    • Developed by: Dong Wu et al.
    • Shared by: Luxonis
    • Model type: Computer Vision
    • License: MIT
    • Resources for more information:
    Training Details
    Training Data
    Trained and evaluated on the BDD100K dataset. The BDD100K dataset has three parts: a training set with 70K images, a validation set with 10K images, and a test set with 20K images. The evaluation of the model was conducted on the validation set, since the labels of the test set are not public.
    Testing Details
    Metrics
    These results showcase the performance of a multi-task model on the BDD100K dataset across three key autonomous driving tasks: Traffic Object Detection, Drivable Area Segmentation, and Lane Detection. The results are taken from the original YOLOP paper.
    Traffic Object Detection Results
    Dataset | Recall (%) | mAP50 (%)
    BDD100K | 89.2 (+1.0) | 76.5 (-0.4)
    Drivable Area Segmentation Results
    Dataset | mIOU (%)
    BDD100K | 91.5 (-0.1)
    Lane Detection Results
    Dataset | Accuracy (%) | IOU (%)
    BDD100K | 70.50 (+0.6) | 26.20 (-0.3)
    The values in parentheses are uncertainty measures indicating the statistical reliability of each reported metric: they give the range within which the true value is likely to fall, accounting for variability in the measurements.
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: images
        • Info: NCHW, BGR un-normalized image
    • Output:
      • Name: output1_yolop
        • Info: NCHW, first detection head
      • Name: output2_yolop
        • Info: NCHW, second detection head
      • Name: output3_yolop
        • Info: NCHW, third detection head
      • Name: drive_area_seg
        • Info: NCHW, output of the drivable area segmentation head
      • Name: lane_line_seg
        • Info: NCHW, output of the lane line segmentation head
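    The input layout above can also be reproduced outside a DepthAI pipeline, for example when feeding the exported ONNX model directly. Below is a minimal sketch of preparing an un-normalized BGR NCHW tensor with OpenCV and NumPy; the file name and the use of the 320x320 variant are assumptions for illustration only.
    import cv2
    import numpy as np

    # Sketch: prepare an un-normalized BGR NCHW input for the 320x320 variant
    frame = cv2.imread("road_scene.jpg")            # OpenCV loads images as BGR
    frame = cv2.resize(frame, (320, 320))           # match the expected spatial size
    tensor = frame.transpose(2, 0, 1)[np.newaxis]   # HWC -> CHW, then add the batch axis
    tensor = tensor.astype(np.float32)              # keep raw 0-255 values (un-normalized)
    print(tensor.shape)                             # (1, 3, 320, 320)
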
    Model Architecture
    YOLOP is a single-shot network that contains one shared encoder and three subsequent decoders to solve the specific tasks. It comprises the Backbone, which is the encoder; the Neck, which fuses the features generated by the backbone; and three heads: the Detect Head, the Drivable Area Segment Head, and the Lane Line Segment Head. See the original paper for more information.
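    Conceptually, the network is one shared trunk feeding three task branches. The sketch below illustrates that topology only; every module is an nn.Identity() placeholder standing in for the real backbone, neck, and head modules, so the layer choices are not those of the actual model.
    import torch
    import torch.nn as nn

    # Illustrative topology sketch: one shared encoder, three task-specific heads
    class YOLOPSketch(nn.Module):
        def __init__(self):
            super().__init__()
            self.backbone = nn.Identity()      # shared encoder: feature extraction
            self.neck = nn.Identity()          # fuses multi-scale backbone features
            self.detect_head = nn.Identity()   # traffic object detection
            self.da_seg_head = nn.Identity()   # drivable area segmentation
            self.ll_seg_head = nn.Identity()   # lane line segmentation

        def forward(self, x):
            feats = self.neck(self.backbone(x))  # one encoder pass shared by all tasks
            return (
                self.detect_head(feats),
                self.da_seg_head(feats),
                self.ll_seg_head(feats),
            )

    outputs = YOLOPSketch()(torch.zeros(1, 3, 320, 320))  # three task outputs from one forward pass
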
    Throughput
    Model variant: yolo-p:bdd100k-320x320
    • Input shape: [1, 3, 320, 320]
    • Output shapes: [[1, 18, 40, 40], [1, 18, 20, 20], [1, 18, 10, 10], [1, 2, 320, 320], [1, 2, 320, 320]]
    • Params (M): 7.936
    • GFLOPs: 4.051
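    The detection-head shapes above are consistent with the 320x320 input divided by the usual YOLO strides of 8, 16, and 32, and the 18 channels plausibly correspond to 3 anchors x (4 box coordinates + 1 objectness + 1 class); note that the strides and channel breakdown are assumptions based on typical YOLO designs, not figures stated on this card. A quick arithmetic check:
    # Check the listed detection grids against 320 / stride for assumed strides 8, 16, 32
    input_size = 320
    for stride, (h, w) in zip((8, 16, 32), ((40, 40), (20, 20), (10, 10))):
        assert (input_size // stride, input_size // stride) == (h, w)
        print(f"stride {stride}: {h}x{w} grid, 18 channels")
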
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2 | FP16 | 15.61 | N/A
    RVC4 | INT8 | 374.06 | 4.37
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
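    To put the throughput figures in perspective, a rough per-inference latency can be read off as the inverse of throughput; this is a simplification (a single-stream view that ignores the 2-thread benchmarking setup), shown only to aid interpretation.
    # Rough per-inference latency implied by the throughput table above (simplified view)
    for platform, infs_per_sec in {"RVC2": 15.61, "RVC4": 374.06}.items():
        print(f"{platform}: ~{1000 / infs_per_sec:.1f} ms per inference")
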
    Utilization
    Models converted for RVC Platforms can be used for inference on OAK devices. DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)). Below, we present the most crucial utilization steps for the particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork

    # Reference the model by its ZOO slug
    model_description = dai.NNModelDescription(
        "luxonis/yolo-p:bdd100k-320x320"
    )

    pipeline = dai.Pipeline()
    # Attach the model and its output parsers to a camera node in the pipeline
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
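
    Here <CameraNode> is a placeholder for a camera node created in the same pipeline. One possible way to create it, assuming the DepthAI v3 Camera node API (the board socket below is an assumption), is:
    # Hypothetical camera node for the <CameraNode> placeholder above (DepthAI v3 API assumed)
    camera_node = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)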
    
    Inspect model head(s):
    • YOLOExtendedParser that outputs an ImgDetectionsExtended message (detected cars).
    • SegmentationParser that outputs a SegmentationMask message (segmentation mask of the drivable area).
    • SegmentationParser that outputs a SegmentationMask message (segmentation mask of the lane lines).
    The model is multi-headed. You can set up the queues as follows:
    detection_parser_output_queue = nn_with_parser.getOutput(0).createOutputQueue()
    da_seg_parser_output_queue = nn_with_parser.getOutput(1).createOutputQueue()
    ll_seg_parser_output_queue = nn_with_parser.getOutput(2).createOutputQueue()
    
    Get parsed output(s):
    pipeline.start()  # start the pipeline before reading from the output queues
    while pipeline.isRunning():
        detection_parser_output: ImgDetectionsExtended = detection_parser_output_queue.get()
        da_seg_parser_output: SegmentationMask = da_seg_parser_output_queue.get()
        ll_seg_parser_output: SegmentationMask = ll_seg_parser_output_queue.get()
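
    Inside the loop, the parsed messages can then be consumed. The snippet below is only a sketch: the attribute names used (detections, label, confidence, mask) are assumptions based on the depthai-nodes message definitions and may differ between releases.
    import numpy as np

    # Sketch of consuming the parsed messages (attribute names are assumptions, see note above)
    for det in detection_parser_output.detections:
        print(det.label, det.confidence)        # one entry per detected traffic object

    da_mask = da_seg_parser_output.mask         # 2D array of per-pixel class indices
    ll_mask = ll_seg_parser_output.mask
    print(np.unique(da_mask), ll_mask.shape)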
    
    Example
    You can quickly run the model using our example.
    The example demonstrates how to build a 1-stage DepthAI pipeline consisting of a road segmentation model. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
    python3 main.py
    
    YOLO-P
    You Only Look Once for Panoptic Driving Perception
    License
    MIT (commercial use permitted)
    Tasks
    Semantic Segmentation
    Model Types
    ONNX
    Model Variants
    Name | Available For | Created At
    yolo-p:bdd100k-320x320 | RVC2, RVC4 | 8 months ago