YOLO-World-L
    Model Details
    Model Description
YOLO-World-L is a next-generation YOLO detector with strong open-vocabulary detection and grounding capabilities.
    • Developed by: Tencent AI Lab
    • Shared by: Luxonis
    • Model type: Computer Vision
    • License: GNU General Public License v3.0
    • Resources for more information: the YOLO-World paper and the official repository (https://github.com/AILab-CVC/YOLO-World)
    To enhance versatility and flexibility, this model has been modified from the original implementation to accept two types of inputs: text embeddings (outputs from the CLIP text encoder) and images.
    Training Details
    Training Data
    Objects365 is a dataset designed to spur object detection research with a focus on diverse objects in the Wild. GoldG is a grounding dataset (combining grounding annotations from GQA and Flickr30k).
    Testing Details
    Metrics
    The mAP results are evaluated on the LVIS dataset at an input resolution of 640×640. Results are taken from the official YOLO-World repository.
    | Model | Pre-train Data | Size | AP^mini | AP_r | AP_c | AP_f | AP^val | AP_r | AP_c | AP_f |
    |---|---|---|---|---|---|---|---|---|---|---|
    | YOLO-Worldv2-L | O365+GoldG | 640 | 33.0 | 22.6 | 32.0 | 35.8 | 26.0 | 18.6 | 23.0 | 32.6 |
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: images
        • Info: NCHW BGR un-normalized image (see the preprocessing sketch below)
      • Name: texts
        • Info: quantized text embedding produced by the CLIP text encoder
    • Output:
      • Name: multiple (see NN archive)
        • Info: raw, unprocessed detection outputs (decoded on the host)
    Model Architecture
    • Backbone: Darknet
    • Neck: includes PAN (Path Aggregation Network)
    • Head: detection head for bounding box regression and object embeddings
    Please consult the YOLO-World paper for more information on the model architecture.
    Throughput
    Model variant: yolo-world-l:640x640-host-decoding
    • Input shapes: [[1, 3, 640, 640], [1, 80, 512]]
    • Output shapes: [[1, 80, 80, 85], [1, 40, 40, 85], [1, 20, 20, 85]] (decoded on the host; see the sketch below)
    • Params (M): 46.805
    • GFLOPs: 90.345
    | Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
    |---|---|---|---|
    | RVC4 | INT8 | 67.66 | 5.54 |
    * Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Quantization
    The RVC4 version of the model was quantized using a custom dataset.
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most crucial utilization steps for this particular model; please consult the docs for more information.
    Example
    You can quickly run the model using our example.
    The example demonstrates how to build a one-stage DepthAI pipeline consisting of an open-vocabulary detection model. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
```bash
python main.py \
    --class_names person car dog
```
    