Model Details
Model Description
eWaSR is an embedded-compute-ready maritime obstacle detection network. The model segments three classes: sky, water, and obstacles. It is intended for applications that require safe navigation of autonomous surface vehicles.
Developed by: Matija Teršek et al.
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The model was trained on the MaSTr1325 dataset, which was created for training deep USV obstacle detection models.
For more information about the training data, check the dataset's documentation.
Testing Details
Metrics
Images from the MODS benchmark are divided into two groups: Overall and Danger Zone. The latter contains images in which obstacles are less than 15 m away and thus pose an immediate threat to the vehicle.
| Metric   | Overall | Danger Zone (< 15 m) |
| -------- | ------- | -------------------- |
| F1 score | 92.56   | 78.09                |
Results are taken from the original eWaSR paper.
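For clarity, the F1 score reported above is the harmonic mean of precision and recall. A minimal sketch of the computation (the counts below are purely illustrative and are not benchmark data):

```python
def f1_score(tp: int, fp: int, fn: int) -> float:
    """Harmonic mean of precision and recall, returned as a percentage."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 100 * 2 * precision * recall / (precision + recall)

print(round(f1_score(tp=90, fp=10, fn=20), 2))  # 85.71 -- illustrative counts only
```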
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW BGR 0-255 image.
Output:
Name: prediction
Info: Segmentation masks, one per class (sky, water, obstacles).
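To illustrate the expected input layout, here is a minimal preprocessing sketch. The file name and the 384x512 resolution are assumptions (use the model's actual input shape); an on-device DepthAI pipeline normally handles this step for you.

```python
import cv2
import numpy as np

# Minimal sketch of preparing an NCHW BGR 0-255 input tensor.
# "frame.jpg" and the 384x512 resolution are assumptions for illustration.
frame = cv2.imread("frame.jpg")                       # OpenCV loads images as BGR
frame = cv2.resize(frame, (512, 384))                 # (width, height)
tensor = frame.astype(np.float32).transpose(2, 0, 1)  # HWC -> CHW, keep 0-255 range
tensor = tensor[np.newaxis, ...]                      # add batch dim -> NCHW
print(tensor.shape)                                   # (1, 3, 384, 512)
```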
Model Architecture
The eWaSR architecture follows an encoder, feature mixer, and decoder design. Backbone features from intermediate encoder layers are resized, concatenated, and processed by a lightweight scale-aware semantic extraction (LSSE) module. The semantically enriched features are injected into higher-layer backbone features by semantic-injection modules (SIM). The resulting features are then concatenated with the IMU mask and passed to the segmentation head.
Please consult the original paper for more information on the model architecture.
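As a rough, illustrative sketch of that data flow only (not the reference implementation): the placeholder modules, channel sizes, and 384x512 resolution below are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EWaSRSketch(nn.Module):
    """Schematic eWaSR-style forward pass; LSSE/SIM blocks are stand-ins."""
    def __init__(self, num_classes=3, chs=(16, 32, 64)):
        super().__init__()
        stages, in_ch = [], 3
        for ch in chs:  # toy encoder: three downsampling stages
            stages.append(nn.Sequential(
                nn.Conv2d(in_ch, ch, 3, 2, 1), nn.BatchNorm2d(ch), nn.ReLU()))
            in_ch = ch
        self.stages = nn.ModuleList(stages)
        mix_ch = sum(chs)
        self.lsse = nn.Sequential(nn.Conv2d(mix_ch, mix_ch, 1), nn.ReLU())  # stand-in LSSE
        self.sims = nn.ModuleList(                                          # stand-in SIMs
            [nn.Conv2d(ch + mix_ch, ch, 1) for ch in chs])
        self.head = nn.Conv2d(mix_ch + 1, num_classes, 1)                   # +1 IMU channel

    def forward(self, image, imu_mask):
        feats, x = [], image
        for stage in self.stages:                 # collect intermediate encoder features
            x = stage(x)
            feats.append(x)
        low = feats[-1].shape[2:]
        # feature mixer: resize + concatenate intermediate features, extract semantics
        mixed = torch.cat([F.adaptive_avg_pool2d(f, low) for f in feats], 1)
        semantics = self.lsse(mixed)
        # inject semantics back into higher-layer backbone features
        enriched = []
        for f, sim in zip(feats, self.sims):
            s = F.interpolate(semantics, size=f.shape[2:], mode="bilinear", align_corners=False)
            enriched.append(sim(torch.cat([f, s], 1)))
        # fuse at the highest feature resolution, add the IMU mask, run the head
        hi = enriched[0].shape[2:]
        fused = torch.cat([F.interpolate(e, size=hi, mode="bilinear", align_corners=False)
                           for e in enriched], 1)
        imu = F.interpolate(imu_mask, size=hi, mode="bilinear", align_corners=False)
        logits = self.head(torch.cat([fused, imu], 1))
        return F.interpolate(logits, size=image.shape[2:], mode="bilinear", align_corners=False)


image = torch.rand(1, 3, 384, 512)
imu_mask = torch.zeros(1, 1, 384, 512)       # binary horizon mask derived from the IMU
print(EWaSRSketch()(image, imu_mask).shape)  # torch.Size([1, 3, 384, 512])
```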
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from a model-profiling package.
Quantization
The RVC4 version of the model was quantized using a custom dataset, created by taking a 40-image subset of the training dataset.
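For illustration, such a calibration subset could be assembled roughly as follows; the directory names and the random seed are assumptions, not the actual setup used for quantization.

```python
import random
import shutil
from pathlib import Path

# Illustrative only: source/destination paths are assumptions.
src = Path("mastr1325/images")
dst = Path("calibration_subset")
dst.mkdir(parents=True, exist_ok=True)

random.seed(0)
images = sorted(src.glob("*.jpg"))
for img in random.sample(images, k=40):  # pick a 40-image calibration subset
    shutil.copy(img, dst / img.name)
```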
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as specified by the model head(s)).
Below, we present the most important utilization steps for this particular model.
Please consult the docs for more information.
The model output is parsed by a SegmentationParser that outputs a dai.ImgFrame message (a segmentation mask covering the 3 classes).
Get parsed output(s):
while pipeline.isRunning():
    parser_output: dai.ImgFrame = parser_output_queue.get()
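Continuing inside the loop above, the parsed frame can be turned into a simple color visualization. The per-pixel class-index layout (0 = sky, 1 = water, 2 = obstacles) and the colors are assumptions for illustration.

```python
import cv2
import numpy as np

# Inside the loop above: colorize the parsed dai.ImgFrame segmentation mask.
# Assumes one class index per pixel (0 = sky, 1 = water, 2 = obstacles); colors are arbitrary.
mask = parser_output.getFrame()                    # 2D array of class indices
palette = np.array([[235, 206, 135],               # sky (BGR)
                    [255, 144, 30],                # water
                    [0, 0, 255]], dtype=np.uint8)  # obstacles
cv2.imshow("eWaSR segmentation", palette[mask])    # map class indices to colors
cv2.waitKey(1)
```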
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: