Model Details
Model Description
The Mobile LSD (M-LSD) model is a real-time, lightweight line segment detector designed for resource-constrained environments. Detected line segments are crucial low-level visual features that provide fundamental information to higher-level vision tasks such as pose estimation, structure from motion, 3D reconstruction, image matching, wireframe-to-image translation, and image rectification.
Developed by: NAVER/LINE Vision
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The Wireframe dataset was used for training. It consists of 5,000 training and 462 test images of man-made environments.
For more information about the training data, check the Wireframe dataset page.
Testing Details
Metrics
For evaluation, the Wireframe and YorkUrban test sets were used. The YorkUrban dataset has 102 test images. Evaluated metrics include the heatmap-based F-score (F^H), structural average precision (sAP), and line matching average precision (LAP).
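As a rough reminder (the exact evaluation protocols are defined in the respective papers), F^H is the pixel-level F-score computed over rasterized line heatmaps, and sAP^5 counts a predicted segment as correct when the squared distances between its endpoints and those of a matched ground-truth segment sum to at most 5, in the commonly used downscaled evaluation resolution:

```latex
% Heatmap-based F-score: harmonic mean of pixel-level precision P and recall R
F^{H} = \frac{2 P R}{P + R}

% sAP^5: predicted endpoints (p_1, p_2) match ground truth (g_1, g_2) if
\lVert p_1 - g_1 \rVert^{2} + \lVert p_2 - g_2 \rVert^{2} \le 5
```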
Wireframe evaluation
| Model | F^H   | sAP^5 | LAP   |
|-------|-------|-------|-------|
| M-LSD | 80.00 | 56.40 | 61.50 |
YorkUrban evaluation
| Model | F^H   | sAP^5 | LAP   |
|-------|-------|-------|-------|
| M-LSD | 64.20 | 24.60 | 30.70 |
Results are taken from the original M-LSD paper.
Technical Specifications
Input/Output Details
Input:
Name: input
Info: 0-255 BGR un-normalized image.
Output:
Name: multiple (see NN archive)
Info: intermediate outputs that are processed by the MLSD parser.
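For illustration, a minimal preprocessing sketch that produces an input in this format is shown below; the 512x512 resolution is an assumption, so check the NN archive for the exact input shape.

```python
import cv2
import numpy as np

# Assumed input resolution; check the NN archive for the exact input shape.
INPUT_W, INPUT_H = 512, 512

frame = cv2.imread("example.jpg")               # OpenCV loads images as BGR uint8 (0-255)
resized = cv2.resize(frame, (INPUT_W, INPUT_H))
# The model expects an un-normalized 0-255 BGR image, so no scaling or
# channel reordering is needed beyond the resize.
nn_input = resized.astype(np.uint8)
print(nn_input.shape, nn_input.dtype)           # (INPUT_H, INPUT_W, 3) uint8
```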
Model Architecture
The M-LSD and M-LSD-tiny models are lightweight encoder-decoder architectures. They use MobileNetV2-based encoder networks. The decoder network combines blocks of types A, B, and C, with block type A for feature map concatenation and upscaling, block type B for residual 3x3 convolutions, and block type C for dilated and 1x1 convolutions.
Please consult the M-LSD paper for more information on the model architecture.
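As a rough, non-authoritative illustration of the three decoder block types described above (layer counts, channel widths, and dilation rates are placeholders, not the values used in the actual model), they could be sketched in PyTorch like this:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BlockA(nn.Module):
    """Type A: upscale the deeper feature map and concatenate it with a skip feature."""
    def __init__(self, in_ch, skip_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch + skip_ch, out_ch, kernel_size=1)

    def forward(self, x, skip):
        x = F.interpolate(x, size=skip.shape[2:], mode="bilinear", align_corners=False)
        return self.conv(torch.cat([x, skip], dim=1))


class BlockB(nn.Module):
    """Type B: residual 3x3 convolutions."""
    def __init__(self, ch):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))


class BlockC(nn.Module):
    """Type C: dilated 3x3 convolution followed by a 1x1 convolution."""
    def __init__(self, ch, dilation=2):
        super().__init__()
        self.dilated = nn.Conv2d(ch, ch, kernel_size=3, padding=dilation, dilation=dilation)
        self.pointwise = nn.Conv2d(ch, ch, kernel_size=1)

    def forward(self, x):
        return self.pointwise(F.relu(self.dilated(x)))
```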
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized using a custom calibration dataset.
This was created by taking a 100-image subset drawn from two datasets.
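Purely as an illustration of how such a calibration subset can be assembled (the directory names and file extension below are placeholders, not the actual dataset locations):

```python
import random
from pathlib import Path

# Placeholder directories standing in for the two source datasets.
source_dirs = [Path("data/dataset_a/images"), Path("data/dataset_b/images")]

# Gather all images and randomly sample a 100-image calibration subset.
images = sorted(p for d in source_dirs for p in d.glob("*.jpg"))
random.seed(42)
calibration_images = random.sample(images, 100)
```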
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)).
Below, we present the key steps for using this particular model.
Please consult the docs for more information.
```python
while pipeline.isRunning():
    # Parsed line segments from the MLSD parser's output queue
    parser_output: Lines = parser_output_queue.get()
```
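For context, the snippet above fits into a pipeline along the following lines. This is a minimal sketch assuming the depthai-nodes ParsingNeuralNetwork helper and a HubAI model slug for this model; the exact import paths, slug, and input variant may differ between releases, so treat those identifiers as assumptions and refer to the docs linked above.

```python
import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork  # import path may differ by version
from depthai_nodes.message import Lines              # parsed line-segment message type

MODEL_SLUG = "luxonis/m-lsd-tiny:512x512"  # assumed slug/variant; see the model page

with dai.Pipeline() as pipeline:
    # Color camera feeding frames to the neural network
    camera = pipeline.create(dai.node.Camera).build()
    # Neural network node combined with the MLSD parser defined in the NN archive
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(camera, MODEL_SLUG)
    parser_output_queue = nn_with_parser.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        parser_output: Lines = parser_output_queue.get()
        # Each detected line carries start/end points and a confidence score
        for line in parser_output.lines:
            print(line.start_point, line.end_point, line.confidence)
```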
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: