    Model Details
    Model Description
    QRDet is a YOLOv8-based convolutional neural network model designed for QR code detection. It is highly effective and accurate even on difficult images. We provide the nano version of the model here.
    • Developed by: Eric Cañas
    • Shared by:
    • Model type: Computer Vision
    • License: MIT
    • Resources for more information:
    Photo by Erik Mclean.
    Training Details
    Training Data
    No data available.
    Testing Details
    Metrics
    No data available.
    Technical Specifications
    • Input:
      • Name: images
      • Info: BGR image
    • Output1:
      • Name: output1_yolov6r2
      • Info: Detection output 1
    • Output2:
      • Name: output2_yolov6r2
      • Info: Detection output 2
    • Output3:
      • Name: output3_yolov6r2
      • Info: Detection output 3
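    The input and output names above (together with the shapes listed under Throughput) can be double-checked directly on the ONNX export. Below is a minimal sketch using onnxruntime; the file name is an assumption and onnxruntime is not part of the official DepthAI workflow:
    import onnxruntime as ort

    # Load the exported model (file name is an assumption; adjust to your export)
    session = ort.InferenceSession("qrdet-nano-512x288.onnx")

    # Print the input tensor names and shapes, e.g. images [1, 3, 288, 512]
    for inp in session.get_inputs():
        print("input:", inp.name, inp.shape)
    # e.g. output1_yolov6r2 [1, 6, 36, 64], output2_yolov6r2 [1, 6, 18, 32], ...
    for out in session.get_outputs():
        print("output:", out.name, out.shape)
    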
    Model Architecture
    • Backbone: CSPDarknet53
    • Neck: Path Aggregation Network (PANet)
    • Head: Anchor-free object detection head (pruned of concatenation)
    Consult the for more information.
    Throughput
    Model variant: qrdet:nano-512x288
    • Input shape: [1, 3, 288, 512]
    • Output shapes: [[1, 6, 36, 64], [1, 6, 18, 32], [1, 6, 9, 16]]
    • Params (M): 3.006
    • GFLOPs: 1.631
    Platform | Precision | Throughput (infs/sec) | Power Consumption (W)
    RVC2     | FP16      | 34.52                 | N/A
    RVC4     | INT8      | 701.34                | 3.73
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Quantization
    The RVC4 version of the model was quantized using a custom dataset. This was created by taking a 50-image subset of the dataset made publicly available on Roboflow by Capstone Project.
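    For reference, the sketch below shows how a 50-image calibration subset could drive post-training INT8 quantization. It uses onnxruntime's static quantization as an assumed stand-in; the actual RVC4 conversion is done with Luxonis tooling, and the file names, directory, and preprocessing here are illustrative only:
    import glob
    import cv2
    import numpy as np
    from onnxruntime.quantization import CalibrationDataReader, QuantType, quantize_static

    class QRCalibrationReader(CalibrationDataReader):
        """Feeds the calibration images to the quantizer (paths are assumptions)."""
        def __init__(self, image_dir):
            self.paths = iter(sorted(glob.glob(f"{image_dir}/*.jpg"))[:50])

        def get_next(self):
            path = next(self.paths, None)
            if path is None:
                return None  # signals that the calibration data is exhausted
            img = cv2.imread(path)                       # BGR image, matching the model input
            img = cv2.resize(img, (512, 288))            # width x height of the nano-512x288 variant
            blob = img.transpose(2, 0, 1)[None].astype(np.float32)  # NCHW: [1, 3, 288, 512]
            return {"images": blob}

    quantize_static(
        "qrdet-nano-512x288.onnx",         # assumed FP32 export
        "qrdet-nano-512x288-int8.onnx",    # quantized output
        QRCalibrationReader("calibration_images"),
        activation_type=QuantType.QInt8,
        weight_type=QuantType.QInt8,
    )
    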
    Utilization
    Models converted for RVC Platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most crucial utilization steps for this particular model. Please consult the docs for more information.
    Install the DAIv3 library:
    pip install depthai
    
    Define model:
    import depthai as dai

    pipeline = dai.Pipeline()

    # Describe which model to fetch from the Luxonis model zoo
    model_description = dai.NNModelDescription(
        "luxonis/qrdet:nano-512x288"
    )

    # Replace <CameraNode> with a camera node created in this pipeline
    nn = pipeline.create(dai.node.DetectionNetwork).build(
        <CameraNode>, model_description
    )
    
    The model is automatically parsed by DAI and it outputs a dai.ImgDetections message (bounding boxes and scores of the detected QR codes).
    Get model output(s):
    parser_output_queue = nn.out.createOutputQueue()  # queue with the parsed detections
    pipeline.start()
    while pipeline.isRunning():
        nn_output: dai.ImgDetections = parser_output_queue.get()
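
    Each dai.ImgDetections message carries a list of detections whose fields follow the standard dai.ImgDetection layout (label, confidence, and normalized xmin/ymin/xmax/ymax). A minimal sketch of reading them, with an illustrative print format of our own:
    for det in nn_output.detections:
        # Bounding box corners are normalized to the 0-1 range
        print(
            f"QR code at ({det.xmin:.2f}, {det.ymin:.2f})-({det.xmax:.2f}, {det.ymax:.2f}) "
            f"with confidence {det.confidence:.2f}"
        )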
    
    Example
    You can quickly run the model using our script. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/qrdet:nano-512x288
    
    QRDet
    QR code detection model.
    License
    MIT
    Commercial use
    Downloads
    279
    Tasks
    Object Detection
    Model Types
    ONNX
    Model Variants
    Available For | Created At
    RVC2, RVC4    | 11 months ago