YOLOv10 Nano
General object detection model
    Model Details
    Model Description
The YOLOv10n model is a convolutional neural network designed for object detection. It introduces a new approach to real-time detection, addressing both the post-processing and model-architecture deficiencies found in previous YOLO versions: by eliminating non-maximum suppression (NMS; see the sketch after the list below) and optimizing various model components, YOLOv10 achieves state-of-the-art performance with significantly reduced computational overhead. Extensive experiments demonstrate its superior accuracy-latency trade-offs across multiple model scales. It was trained on the COCO dataset and can detect objects of 80 classes.
• Developed by: Tsinghua University
• Shared by: Luxonis
• Model type: Computer Vision
• License: GNU Affero General Public License v3.0
• Resources for more information: the YOLOv10 paper (arXiv:2405.14458) and the official YOLOv10 GitHub repository
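For context, the NMS post-processing that YOLOv10 eliminates conventionally looks like the sketch below. This is a generic illustration of the classic greedy NMS step, not YOLOv10 or Luxonis code:

import numpy as np

def nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list:
    """Greedy NMS: keep the best-scoring box, drop boxes that overlap it."""
    x1, y1, x2, y2 = boxes.T
    areas = (x2 - x1) * (y2 - y1)
    order = scores.argsort()[::-1]  # candidate indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        # IoU of the best box against all remaining candidates
        xx1 = np.maximum(x1[i], x1[order[1:]])
        yy1 = np.maximum(y1[i], y1[order[1:]])
        xx2 = np.minimum(x2[i], x2[order[1:]])
        yy2 = np.minimum(y2[i], y2[order[1:]])
        inter = np.maximum(0.0, xx2 - xx1) * np.maximum(0.0, yy2 - yy1)
        iou = inter / (areas[i] + areas[order[1:]] - inter)
        order = order[1:][iou <= iou_thr]  # discard heavy overlaps
    return keep

YOLOv10's one-to-one head emits at most one box per object, so this pruning step (and its latency cost) disappears from the inference path.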
    Training Details
    Training Data
The model was trained on the COCO dataset, a large-scale object detection, segmentation, and captioning dataset.
    Testing Details
    Metrics
mAP and speed results are evaluated on the COCO dataset at an input resolution of 640×640. Results are taken from the official YOLOv10 repository.
Model        AP
YOLOv10-N    38.5
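For reference, COCO-style AP figures like this are typically produced with pycocotools; a minimal sketch, assuming predictions have been exported to the standard COCO results JSON (both file names here are hypothetical):

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("annotations/instances_val2017.json")    # ground-truth annotations
coco_dt = coco_gt.loadRes("yolov10n_predictions.json")  # COCO results format

evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP@[0.50:0.95], AP50, AP75, etc.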
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: image
    • Info: NCHW BGR un-normalized image (see the preprocessing sketch after this list)
    • Outputs:
      • Name: multiple (see NN archive)
        • Info: Unprocessed outputs of a multitude of detections
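If you run the model outside a DepthAI pipeline (e.g. against the ONNX export), the input description above implies preprocessing roughly like the sketch below. It assumes the 512×288 variant listed under Throughput and relies on cv2 loading images in BGR order:

import cv2
import numpy as np

def preprocess(path: str) -> np.ndarray:
    """Load an image as an un-normalized NCHW BGR tensor of shape (1, 3, 288, 512)."""
    img = cv2.imread(path)             # OpenCV loads images in BGR order
    img = cv2.resize(img, (512, 288))  # (width, height) for this variant
    img = img.transpose(2, 0, 1)       # HWC -> CHW
    return img[np.newaxis, ...]        # add batch dim; no normalization applied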
    Model Architecture
    • Backbone: an enhanced version of CSPNet (Cross Stage Partial Network)
    • Neck: includes PAN (Path Aggregation Network)
    • Head: One-to-One head
Please consult the YOLOv10 paper for more information on the model architecture.
    Throughput
    Model variant: yolov10-nano:coco-512x288
• Input shape: [1, 3, 288, 512]
• Output shapes: [[1, 85, 36, 64], [1, 85, 18, 32], [1, 85, 9, 16]]
• Params (M): 2.299
• GFLOPs: 1.435
Platform   Precision   Throughput (infs/sec)   Power Consumption (W)
RVC2       FP16        28.73                   N/A
RVC4       INT8        618.98                  3.08
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the model package.
    Quantization
The RVC4 version of the model was quantized using a custom calibration dataset of 128 images taken from the full dataset.
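For illustration, such a calibration set can be assembled by sampling images from the source data. A minimal sketch, where the paths and the random-sampling strategy are assumptions rather than the exact procedure used:

import random
import shutil
from pathlib import Path

source = sorted(Path("coco/train2017").glob("*.jpg"))  # hypothetical source images
target = Path("calibration_data")
target.mkdir(exist_ok=True)

random.seed(0)  # make the selection reproducible
for img in random.sample(source, 128):  # 128 images, as noted above
    shutil.copy(img, target / img.name)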
    Utilization
Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the flow of information between the device, the inference model, and the output parser (as defined in the model head(s)). Below we present the most crucial steps for utilizing this particular model; please consult the docs for more information.
Install the DAIv3 library:
    pip install depthai
    
    Define model:
import depthai as dai

pipeline = dai.Pipeline()
camera = pipeline.create(dai.node.Camera).build()  # camera node feeding the network

model_description = dai.NNModelDescription(
    "luxonis/yolov10-nano:coco-512x288"
)

# DetectionNetwork runs the model and parses its outputs on-device
nn = pipeline.create(dai.node.DetectionNetwork).build(
    camera, model_description
)
    
The model is automatically parsed by DAI, and it outputs an ImgDetections message (bounding boxes, labels, and scores of the detected objects).
    Get model output(s):
parser_output_queue = nn.out.createOutputQueue()  # queue for the parsed detections
pipeline.start()

while pipeline.isRunning():
    nn_output: dai.ImgDetections = parser_output_queue.get()
    
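Each ImgDetections message holds a list of detections with label, confidence, and bounding-box coordinates normalized to [0, 1]; a minimal sketch of consuming them inside the loop above:

for det in nn_output.detections:
    print(
        f"label={det.label} conf={det.confidence:.2f} "
        f"bbox=({det.xmin:.2f}, {det.ymin:.2f}, {det.xmax:.2f}, {det.ymax:.2f})"
    )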
    Example
You can quickly run the model using our example script. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/yolov10-nano:coco-512x288
    
Model Information
• License: GNU Affero General Public License v3.0 (commercial use)
• Tasks: Object Detection
• Model Types: ONNX
• Available For: RVC2, RVC4