Model Details
Model Description
Ultra Fast Lane Detection is a lightweight, fast, and accurate lane detection model that can detect up to 4 lanes. It is well suited to advanced driver-assistance system (ADAS) applications, where lane detection is a crucial, latency-sensitive component. The model is anchor-based, and detections are represented as clusters of points.
Developed by: Zequn Qin et al.
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The model was trained on two lane detection datasets: and .
For more information about the training data, check the .
Testing Details
Metrics
The evaluation was done on the validation set of both training datasets.
| Metric   | Value |
|----------|-------|
| Accuracy | 95.87 |
| F1-score | 68.4  |
Results are taken from .
Technical Specifications
Input/Output Details
Input:
Name: input
Info: NCHW BGR un-normalized image
Output:
Name: output
Info: intermediate results (postprocessing is needed)
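The NCHW BGR un-normalized input layout can be illustrated with a short sketch. This is pure NumPy and only handles the layout change; resizing the frame to the model resolution (e.g. 800x288 for the CULane variant listed below) is assumed to be done beforehand with your preferred image library:

```python
import numpy as np

def to_nchw_bgr(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert an HWC BGR uint8 frame into the NCHW BGR un-normalized
    layout the model expects. No scaling or mean subtraction is applied,
    since the model takes un-normalized pixel values."""
    # HWC -> CHW, then add the batch dimension: NCHW
    chw = np.transpose(frame_bgr, (2, 0, 1))
    return np.expand_dims(chw, axis=0).astype(np.uint8)

# A dummy 288x800 BGR frame (height x width of the 800x288 variant)
dummy = np.zeros((288, 800, 3), dtype=np.uint8)
print(to_nchw_bgr(dummy).shape)  # -> (1, 3, 288, 800)
```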
Model Architecture
Backbone: ResNet-18
Neck: The neck consists of additional layers for feature aggregation and global context learning. The neck takes the feature maps from the backbone and refines them for the lane detection task.
Head: The head is responsible for performing the final task-specific predictions. In this model, the head involves a row-based selecting mechanism for lane detection.
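The row-based selecting mechanism can be sketched as follows. This is a toy decoder following the scheme from the Ultra Fast Lane Detection paper: for each predefined row anchor and lane, the head scores a set of horizontal grid cells plus one extra "no lane" class, and the lane position is taken as the expectation over the cell softmax. The tensor shape and grid size here are illustrative assumptions, not the exact layout of this export:

```python
import numpy as np

def decode_row_anchors(logits: np.ndarray) -> np.ndarray:
    """Toy decoding of a row-based selection output.

    `logits` is assumed to have shape (grid+1, rows, lanes): for every
    row anchor and lane, `grid` horizontal cells are scored plus one
    extra "no lane" class."""
    grid = logits.shape[0] - 1
    # Softmax over the grid cells only (drop the "no lane" class)
    e = np.exp(logits[:grid] - logits[:grid].max(axis=0, keepdims=True))
    prob = e / e.sum(axis=0, keepdims=True)
    # Expected grid-cell index per (row, lane) -> sub-cell x position
    idx = np.arange(grid).reshape(grid, 1, 1)
    x = (prob * idx).sum(axis=0)
    # Mark (row, lane) pairs where the "no lane" class wins
    absent = logits.argmax(axis=0) == grid
    x[absent] = -1.0
    return x  # shape (rows, lanes); -1 means no lane at that row anchor

rng = np.random.default_rng(0)
out = decode_row_anchors(rng.normal(size=(101, 18, 4)))
print(out.shape)  # (18, 4)
```

Because each row anchor is a small classification problem rather than per-pixel segmentation, this formulation is what makes the model fast.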
Throughput
Model variant: ultra-fast-lane-detection:culane-800x288
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized using the Driving dataset in HubAI.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most crucial utilization steps for the particular model.
Please consult the docs for more information.
The model's raw output is decoded by the LaneDetectionParser, which outputs a Clusters message (at most 4 clusters of points, each cluster representing one lane).
Get parsed output(s):
```python
while pipeline.isRunning():
    parser_output: Clusters = parser_output_queue.get()
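A minimal, device-free sketch of consuming such a message is shown below. The `Point`/`Cluster`/`Clusters` classes here are mock stand-ins with hypothetical attribute names so the loop is runnable on its own; the actual depthai_nodes message types may differ:

```python
from dataclasses import dataclass, field

# Mock stand-ins for the parser's output message (attribute names are
# assumptions for illustration, not the real depthai_nodes API).
@dataclass
class Point:
    x: float
    y: float

@dataclass
class Cluster:
    points: list = field(default_factory=list)

@dataclass
class Clusters:
    clusters: list = field(default_factory=list)

def lanes_as_polylines(msg: Clusters) -> list:
    """One polyline (list of (x, y) tuples) per detected lane."""
    return [[(p.x, p.y) for p in c.points] for c in msg.clusters]

msg = Clusters([Cluster([Point(0.1, 0.9), Point(0.2, 0.7)]),
                Cluster([Point(0.8, 0.9), Point(0.7, 0.7)])])
print(lanes_as_polylines(msg))  # two polylines, one per lane
```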
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: