Model Details
Model Description
The Fast Segment Anything Model (FastSAM) is a novel, real-time CNN-based solution for the Segment Anything task.
This task is designed to segment any object within an image based on various possible user interaction prompts.
FastSAM significantly reduces computational demands while maintaining competitive performance, making it a practical choice for a variety of vision tasks.
Here, we implement the FastSAM-x variant of the model.
Developed by: Ultralytics
Shared by:
Model type: Computer Vision
License:
Resources for more information:
Training Details
Training Data
The model was trained on only 2% of the SA-1B dataset.
Testing Details
Metrics
Bbox and mask AR@1000 results are evaluated on the LVIS v1 dataset and are taken from the original FastSAM evaluation.
| Model | bbox AR@1000 | mask AR@1000 |
| --- | --- | --- |
| FastSAM-x | 57.1 | 49.7 |
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW BGR un-normalized image
Output:
Name: multiple (see NN archive)
Info: Unprocessed outputs containing detections, masks, and prototype masks (protos)
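If you run the exported model outside of a DepthAI pipeline (for example, against the ONNX from the NN archive), the input specification above boils down to a small preprocessing step. The sketch below is illustrative only; the 512x288 resolution is an assumption, so read the actual input shape from the NN archive.

```python
import cv2
import numpy as np

# Hypothetical input resolution -- read the actual shape from the NN archive.
INPUT_W, INPUT_H = 512, 288

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Convert an OpenCV BGR frame to an un-normalized NCHW BGR tensor."""
    resized = cv2.resize(frame_bgr, (INPUT_W, INPUT_H))  # HWC, BGR, uint8
    chw = resized.transpose(2, 0, 1)                     # CHW
    # Keep values in the 0-255 range (un-normalized); cast to float32 only if the runtime requires it.
    return np.expand_dims(chw, axis=0)                   # NCHW
```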
Model Architecture
Backbone: CSPDarknet53
Head: Anchor-free object segmentation head from YOLOv8 seg model (pruned of concatenation)
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Quantization
The RVC4 version of the model was quantized using a custom calibration dataset of 128 images.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in model head(s)).
Below, we present the most crucial utilization steps for the particular model.
Please consult the docs for more information.
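As a rough sketch of that flow, the snippet below builds a pipeline that streams camera frames into the model and attaches the parser via ParsingNeuralNetwork from depthai-nodes. The model slug, the import path, and the exact builder calls are assumptions based on the standard DepthAI v3 examples rather than taken from this page, so double-check them against the docs.

```python
import depthai as dai
from depthai_nodes import ParsingNeuralNetwork  # import path may differ across depthai-nodes versions

# Hypothetical model slug -- use the exact slug shown on this model's page.
MODEL_SLUG = "luxonis/fast-sam-x:512x288"

with dai.Pipeline() as pipeline:
    # Camera frames are fed straight into the neural network node.
    camera = pipeline.create(dai.node.Camera).build()

    # ParsingNeuralNetwork bundles the NN node with the output parser defined
    # in the NN archive (for this model, the FastSAMParser described below).
    nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(
        camera, dai.NNModelDescription(MODEL_SLUG)
    )

    parser_output_queue = nn_with_parser.out.createOutputQueue()
    pipeline.start()
    # ... consume the queue inside this `with` block, as shown in the next step.
```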
The model's outputs are decoded by the FastSAMParser, which outputs a SegmentationMask message (the mask of each of the segmented objects).
Get parsed output(s):
while pipeline.isRunning():
    parser_output: SegmentationMask = parser_output_queue.get()
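To do something with the parsed message, read its mask and, for instance, colorize it for display. The sketch below assumes SegmentationMask exposes a 2D mask array of per-pixel instance indices with -1 for background, matching the depthai-nodes message description; verify the attribute names against the depthai-nodes version you have installed.

```python
import cv2
import numpy as np

# `parser_output` is the SegmentationMask message obtained above.
# Assumption: `mask` is a 2D integer array, one instance index per pixel, -1 for background.
mask = parser_output.mask
num_instances = int(mask.max()) + 1

# Color each instance for a quick visual check.
colors = np.random.randint(0, 255, size=(max(num_instances, 1), 3), dtype=np.uint8)
overlay = np.zeros((*mask.shape, 3), dtype=np.uint8)
for idx in range(num_instances):
    overlay[mask == idx] = colors[idx]

cv2.imshow("FastSAM segmentation", overlay)
cv2.waitKey(1)
```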
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: