Objectron
    3D bounding box prediction model.
    Model Details
    Model Description
    Objectron is originally a two-stage model consisting of an object detector and a keypoint regressor. This model card covers the second stage: the 3D keypoint regressor. For the object detector, you can use any model you like from our model ZOO, e.g. YOLO or MobileNet-SSD. Objectron is very fast and predicts the 9 keypoints of a 3D bounding box.
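    To make the two-stage design concrete, here is a minimal conceptual sketch; the detector, regressor, and bounding-box helpers are hypothetical placeholders, not part of any Luxonis API:
    def two_stage_objectron(frame, detector, keypoint_regressor):
        # Stage 1: any 2D detector (e.g. a YOLO model) proposes object boxes.
        results = []
        for bbox in detector(frame):
            # Stage 2: the Objectron regressor runs on the cropped object and
            # predicts the 9 keypoints of the 3D box plus an objectness score.
            crop = frame[bbox.y0:bbox.y1, bbox.x0:bbox.x1]
            keypoints, score = keypoint_regressor(crop)
            results.append((bbox, keypoints, score))
        return results
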
    • Developed by: Google
    • Shared by:
    • Model type: Computer vision
    • License: Apache 2.0
    • Resources for more information:
    Training Details
    Training Data
    The model was trained on the Objectron dataset. The dataset consists of 15K annotated video clips supplemented with over 4M annotated images collected from a geo-diverse sample.
    Testing Details
    Metrics
    Unfortunately, no evaluation results are reported for this specific model, so we list the evaluation results obtained on the COCO-pretrained model.
    Metric    Chair    Cup     Camera    Shoe
    AP@0.5    0.85     0.54    0.80      0.66
    MPE       0.05     0.05    0.04      0.04
    MPE - mean pixel error
    Results are taken from .
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: input
        • Info: NCHW BGR un-normalized image
    • Output:
      • Name: multiple (see NN archive)
        • Info: predicted keypoints and objectness score.
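    As a sketch of what the raw outputs contain (the exact tensor names and scaling live in the NN archive and are handled by the output parsers; the layout below is an assumption based on the output shapes listed under Throughput):
    import numpy as np

    # Hypothetical raw tensors matching the shapes [[1, 18], [1, 1]].
    raw_keypoints = np.zeros((1, 18), dtype=np.float32)  # placeholder values
    raw_score = np.zeros((1, 1), dtype=np.float32)       # placeholder values

    keypoints = raw_keypoints.reshape(9, 2)  # 9 (x, y) keypoint pairs
    score = float(raw_score[0, 0])           # objectness score
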
    Model Architecture
    • Backbone: Used for feature extraction from the input image; it is based on EfficientNet-Lite, which processes the image into a feature map.
    • Neck: Encodes the backbone feature map into a 7x7x1152 embedding.
    • Head: Performs the final task, in this case the regression of the 2D keypoints of a 3D bounding box from the feature maps. The output keypoints are then lifted into 3D space with the EPnP algorithm (see the sketch after this list).
    Please check the linked resources for more information.
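    A minimal sketch of the EPnP lifting step using OpenCV; the unit-box vertex order and the camera intrinsics are assumptions for illustration, not the pipeline's actual decoding code:
    import cv2
    import numpy as np

    # Canonical 3D box: center first, then the 8 corners of a unit cube
    # (assumed vertex order; verify against the parser output).
    UNIT_BOX = np.array(
        [[0.0, 0.0, 0.0]]
        + [[x, y, z] for x in (-0.5, 0.5) for y in (-0.5, 0.5) for z in (-0.5, 0.5)],
        dtype=np.float32,
    )

    def lift_to_3d(keypoints_2d, camera_matrix):
        # keypoints_2d: (9, 2) pixel coordinates predicted by the model.
        ok, rvec, tvec = cv2.solvePnP(
            UNIT_BOX,
            keypoints_2d.astype(np.float32),
            camera_matrix,
            None,  # distortion coefficients (assumed undistorted image)
            flags=cv2.SOLVEPNP_EPNP,
        )
        return (rvec, tvec) if ok else None
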
    Throughput
    Model variant: objectron:cup-224x224
    • Input shape: [1, 3, 224, 224] • Output shapes: [[1, 18], [1, 1]]
    • Params (M): 2.590 • GFLOPs: 0.348
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         60.36                    N/A
    RVC4        FP16         634.38                   3.69
    Model variant: objectron:camera-224x224
    • Input shape: [1, 3, 224, 224] • Output shapes: [[1, 18], [1, 1]]
    • Params (M): 2.590 • GFLOPs: 0.348
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         60.63                    N/A
    RVC4        FP16         625.42                   3.14
    Model variant: objectron:chair-224x224
    • Input shape: [1, 3, 224, 224] • Output shapes: [[1, 18], [1, 1]]
    • Params (M): 2.590 • GFLOPs: 0.348
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         61.45                    N/A
    RVC4        FP16         632.15                   3.27
    Model variant: objectron:sneakers-224x224
    • Input shape: [1, 3, 224, 224] • Output shapes: [[1, 18], [1, 1]]
    • Params (M): 2.590 • GFLOPs: 0.348
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         61.07                    N/A
    RVC4        FP16         631.92                   2.90
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Utilization
    Models converted for RVC Platforms can be used for inference on OAK devices. DepthAI pipelines are used to define the information flow linking the device, inference model, and the output parser (as defined in the model head(s)). Below, we present the most crucial utilization steps for this particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes.node import ParsingNeuralNetwork

    pipeline = dai.Pipeline()

    model_description = dai.NNModelDescription(
        "luxonis/objectron:camera-224x224"
    )

    # <CameraNode> is a placeholder for your camera node (see the sketch below).
    nn = pipeline.create(ParsingNeuralNetwork).build(
        <CameraNode>, model_description
    )
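
    One way to create the camera node referenced above (a sketch using the DepthAI v3 camera API; adapt the setup to your device):
    # Create a color camera node and pass it to ParsingNeuralNetwork.build()
    # in place of <CameraNode>.
    camera_node = pipeline.create(dai.node.Camera).build()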
    
    Inspect model head(s):
    • KeypointParser that outputs a Keypoints message (detected keypoints).
    • RegressionParser that outputs a Predictions message (score).
    The model is multi-headed. You can set up the queues as follows:
    keypoints_parser_output_queue = nn.getOutput(0).createOutputQueue()
    score_parser_output_queue = nn.getOutput(1).createOutputQueue()
    
    Get parsed output(s):
    pipeline.start()
    while pipeline.isRunning():
        keypoints_parser_output: Keypoints = keypoints_parser_output_queue.get()
        score_parser_output: Predictions = score_parser_output_queue.get()
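
    Each parsed message can then be read directly. The attribute names below follow the depthai-nodes message definitions and should be verified against the docs for your installed version:
    # Keypoints message: a list of keypoints with x/y coordinates (assumed layout).
    for kp in keypoints_parser_output.keypoints:
        print(kp.x, kp.y)

    # Predictions message: a list of scalar predictions; here, the objectness score.
    score = score_parser_output.predictions[0].prediction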
    
    Example
    You can quickly run the model using our example.
    This example demonstrates how to perform 3D object detection using the model. The model predicts the 3D bounding box of the foreground object in the image. General object detection is handled by a separate detection model, so the pipeline is a standard 2-stage pipeline with detection and 3D object detection models. The example works on both RVC2 and RVC4 and can predict 3D bounding boxes for chairs, cameras, cups, and shoes. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
    python3 main.py
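
    If you want to visualize the prediction yourself instead of using the visualizer, here is a minimal OpenCV sketch for drawing the box wireframe; the center-first vertex order and the edge list are assumptions that should be verified against the parser output:
    import cv2
    import numpy as np

    # Assumed vertex order: index 0 = box center, indices 1-8 = cube corners.
    EDGES = [
        (1, 2), (1, 3), (1, 5), (2, 4), (2, 6), (3, 4),
        (3, 7), (4, 8), (5, 6), (5, 7), (6, 8), (7, 8),
    ]

    def draw_box(frame, pts):
        # pts: (9, 2) keypoints in pixel coordinates.
        pts = pts.astype(int)
        for a, b in EDGES:
            cv2.line(frame, tuple(pts[a]), tuple(pts[b]), (0, 255, 0), 2)
        cv2.circle(frame, tuple(pts[0]), 4, (0, 0, 255), -1)  # box center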
    
    License: Apache 2.0 (commercial use permitted)
    Downloads: 771
    Tasks: Keypoint Detection
    Model Types: ONNX
    Model Variants
    Name                          Available For
    objectron:cup-224x224         RVC2, RVC3, RVC4
    objectron:camera-224x224      RVC2, RVC3, RVC4
    objectron:chair-224x224       RVC2, RVC3, RVC4
    objectron:sneakers-224x224    RVC2, RVC3, RVC4