YOLO-World is the next-generation YOLO detector, with a strong open-vocabulary detection capability and grounding ability.
License: GNU General Public License v3.0 (commercial use permitted)
Downloads: 4561
Tasks: Object Detection
Model Types: ONNX
Model Variants
Available For | Created At
RVC4 | 3 months ago
RVC4 | 4 days ago
RVC4 | 10 days ago
Model Details
Model Description
YOLO-World-L is the next-generation YOLO detector, with a strong open-vocabulary detection capability and grounding ability.
Developed by: Tencent AI Lab
Shared by:
Model type: Computer Vision
License: GNU General Public License v3.0
Resources for more information:
To enhance versatility and flexibility, this model has been modified from the original implementation to accept two types of inputs: text embeddings (outputs from the CLIP text encoder) and images.
Training Details
Training Data
Objects365 is a dataset designed to spur object detection research with a focus on diverse objects in the wild.
GoldG is a grounding dataset.
Testing Details
Metrics
mAP results are evaluated on the LVIS dataset at an input resolution of 640×640. Results are as reported by the original YOLO-World authors.
Model | Pre-train Data | Size | APmini | APr | APc | APf | APval | APr | APc | APf
YOLO-Worldv2-L | O365+GoldG | 640 | 33.0 | 22.6 | 32.0 | 35.8 | 26.0 | 18.6 | 23.0 | 32.6
Technical Specifications
Input/Output Details
Input:
Name: images
Info: NCHW BGR un-normalized image
Name: texts
Info: quantized text embedding produced by the CLIP text encoder (see the sketch after this list for how such embeddings can be generated)
Output:
Name: multiple (see NN archive)
Info: unprocessed outputs containing the raw detections
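To illustrate how the texts input can be prepared, here is a minimal sketch that computes L2-normalized CLIP text embeddings for a set of class names. The Hugging Face transformers package and the openai/clip-vit-base-patch32 checkpoint are assumptions made for this sketch; the exact encoder variant, embedding layout, dtype, and quantization this model expects should be verified against the NN archive configuration.

# Minimal sketch (assumptions noted above): generate CLIP text embeddings
# for the class names that the open-vocabulary detector should recognize.
import numpy as np
from transformers import CLIPTokenizer, CLIPTextModelWithProjection

class_names = ["person", "car", "dog"]

tokenizer = CLIPTokenizer.from_pretrained("openai/clip-vit-base-patch32")
text_encoder = CLIPTextModelWithProjection.from_pretrained("openai/clip-vit-base-patch32")

tokens = tokenizer(class_names, padding=True, return_tensors="pt")
text_embeds = text_encoder(**tokens).text_embeds                    # (num_classes, 512)
text_embeds = text_embeds / text_embeds.norm(dim=-1, keepdim=True)  # L2-normalize per class

# Convert to a numpy array for the pipeline; the required dtype/quantization
# is model-specific (check the NN archive config before feeding it).
texts_input = text_embeds.detach().numpy().astype(np.float32)
print(texts_input.shape)  # e.g. (3, 512)

These embeddings are then sent to the texts input of the model inside the DepthAI pipeline; the example referenced below shows the complete flow.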
Model Architecture
Backbone: Darknet
Neck: includes PAN (Path Aggregation Network)
Head: head for bounding box regression and object embeddings
Please consult the original YOLO-World paper for more information on the model architecture.
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained with an external profiling package.
Quantization
The RVC4 version of the model was quantized using a custom dataset.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines are used to define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)).
Below, we present the most important steps for utilizing this particular model.
Please consult the docs for more information.
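As a rough outline of such a pipeline, the sketch below uses the depthai and depthai-nodes packages to run a model with its output parser attached. The model slug, the node signatures, and the handling of the texts input are assumptions made for illustration; the example referenced below is the authoritative reference.

import depthai as dai
from depthai_nodes.node import ParsingNeuralNetwork  # attaches the parser defined in the model head(s)

# Hypothetical model identifier used for illustration; take the real slug from the model page.
MODEL_SLUG = "luxonis/yolo-world-l:640x640"

with dai.Pipeline() as pipeline:
    # On-device camera producing frames for the neural network.
    camera = pipeline.create(dai.node.Camera).build()
    # Neural network node plus the output parser described in the NN archive.
    nn = pipeline.create(ParsingNeuralNetwork).build(camera, MODEL_SLUG)
    detections_queue = nn.out.createOutputQueue()

    pipeline.start()
    while pipeline.isRunning():
        detections = detections_queue.get()  # parsed detection message
        # ... visualize or log the detections here. For this model, the "texts"
        # input must additionally be fed with CLIP text embeddings (see above);
        # the referenced example shows the complete flow.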
Example
You can quickly run the model using our example.
The example demonstrates how to build a 1-stage DepthAI pipeline consisting of an open-vocabulary detection model.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run:
python main.py \
--class_names person car dog
For a more advanced example, check out the example. It demonstrates an advanced use of a custom frontend. On the DepthAI backend, it runs either the YOLO-World or YOLOE model on-device, with configurable class labels and confidence threshold, both controllable via the frontend.
The frontend, built using the @luxonis/depthai-viewer-common package, displays a real-time video stream with detections.
To try it out, select the YOLOE model using the app arguments and run: