    Mobile Object Localizer
    A class-agnostic mobile object detector.
    Model Details
    Model Description
    MobileObjectLocalizer is a general-purpose object detection model developed by Google that can be used for any type of object. Unlike models such as YOLO, which classify objects among a predefined set of classes, MobileObjectLocalizer can detect any object, but it does not assign a category to its detections.
    • Developed by: Google
    • Shared by:
    • Model type: Computer vision
    • License: Apache 2.0
    Training Details
    No training details available.
    Testing Details
    No testing details available.
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: normalized_input_image_tensor
      • Info: NCHW BGR un-normalized image
    • Output:
      • Name: scores and bboxes
      • Info: Scores and bounding boxes that still require confidence filtering and NMS (see the sketch below).
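    In practice, the DetectionParser described in the Utilization section performs this decoding for you. Purely to illustrate what "still require confidence filtering and NMS" means, the sketch below thresholds the 738 objectness scores and applies a greedy NMS in NumPy. It assumes the boxes are normalized [ymin, xmin, ymax, xmax] pairs; the random tensors only stand in for real model outputs and the thresholds are arbitrary.
    import numpy as np

    def greedy_nms(boxes: np.ndarray, scores: np.ndarray, iou_thr: float = 0.5) -> list[int]:
        """Keep the highest-scoring boxes, dropping any box whose IoU with a kept box exceeds iou_thr."""
        areas = np.prod(boxes[:, 2:] - boxes[:, :2], axis=1)
        order = scores.argsort()[::-1]
        keep = []
        while order.size > 0:
            i = int(order[0])
            keep.append(i)
            top_left = np.maximum(boxes[i, :2], boxes[order[1:], :2])
            bottom_right = np.minimum(boxes[i, 2:], boxes[order[1:], 2:])
            inter = np.prod(np.clip(bottom_right - top_left, 0.0, None), axis=1)
            iou = inter / (areas[i] + areas[order[1:]] - inter + 1e-9)
            order = order[1:][iou <= iou_thr]
        return keep

    # Stand-ins for the two raw outputs: objectness scores (1, 1, 738) and boxes (1, 738, 4).
    rng = np.random.default_rng(0)
    centers = rng.uniform(0.2, 0.8, (738, 2))
    sizes = rng.uniform(0.05, 0.2, (738, 2))
    boxes = np.concatenate([centers - sizes / 2, centers + sizes / 2], axis=1)[None]  # assumed [ymin, xmin, ymax, xmax]
    scores = rng.uniform(0.0, 1.0, (1, 1, 738))

    conf = scores.reshape(-1)
    flat_boxes = boxes.reshape(-1, 4)
    mask = conf > 0.3  # confidence threshold (tunable)
    kept = greedy_nms(flat_boxes[mask], conf[mask])
    for idx in kept[:5]:  # a few surviving detections
        print(f"score={conf[mask][idx]:.2f} box={flat_boxes[mask][idx]}")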
    Model Architecture
    • Backbone: MobileNetV2 with a 0.75 width multiplier.
    • Head: SSDLite detection head.
    Throughput
    Model variant: mobile-object-localizer:192x192
    • Input shape: [1, 3, 192, 192]
    • Output shapes: [[1, 1, 738], [1, 738, 4]]
    • Params (M): 1.746
    • GFLOPs: 0.175
    Platform   Precision   Throughput (infs/sec)   Power Consumption (W)
    RVC2       FP16        67.74                   N/A
    RVC4       FP16        603.80                  2.85
    * Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most important utilization steps for this particular model. Please consult the docs for more information.
    Install DAIv3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork

    pipeline = dai.Pipeline()
    camera = pipeline.create(dai.node.Camera).build()  # replaces the <CameraNode> placeholder

    model_description = dai.NNModelDescription(
        "luxonis/mobile-object-localizer:192x192"
    )

    nn = pipeline.create(ParsingNeuralNetwork).build(
        camera, model_description
    )
    
    Inspect model head(s):
    • DetectionParser, which outputs a message containing bounding boxes and scores.
    Get parsed output(s):
    parser_output_queue = nn.out.createOutputQueue()
    pipeline.start()
    while pipeline.isRunning():
        parser_output: ImgDetectionsExtended = parser_output_queue.get()
    
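    For reference, here is a minimal end-to-end sketch that ties these steps together and draws the parsed detections on the camera frames. It assumes a default color camera, that the network's passthrough output carries the input frames, and that each parsed detection exposes normalized xmin/ymin/xmax/ymax coordinates and a confidence score; treat the node, queue, and attribute names as assumptions to check against your installed depthai-nodes version, not as the exact API.
    import cv2
    import depthai as dai
    from depthai_nodes import ParsingNeuralNetwork

    with dai.Pipeline() as pipeline:
        camera = pipeline.create(dai.node.Camera).build()
        nn = pipeline.create(ParsingNeuralNetwork).build(
            camera, dai.NNModelDescription("luxonis/mobile-object-localizer:192x192")
        )

        frame_queue = nn.passthrough.createOutputQueue()  # frames fed to the network (assumed output name)
        detections_queue = nn.out.createOutputQueue()     # parsed detections from the DetectionParser

        pipeline.start()
        while pipeline.isRunning():
            frame = frame_queue.get().getCvFrame()
            detections = detections_queue.get().detections
            h, w = frame.shape[:2]
            for det in detections:
                # Coordinates are assumed to be normalized to [0, 1]; adjust if the
                # message exposes a different bounding-box representation.
                x1, y1 = int(det.xmin * w), int(det.ymin * h)
                x2, y2 = int(det.xmax * w), int(det.ymax * h)
                cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
                cv2.putText(frame, f"{det.confidence:.2f}", (x1, max(y1 - 5, 10)),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
            cv2.imshow("mobile-object-localizer", frame)
            if cv2.waitKey(1) == ord("q"):
                break
    The official example script described below provides the same functionality with the DepthAI visualizer and is the recommended starting point.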
    Example
    You can quickly run the model using our script. It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/mobile-object-localizer:192x192
    
    License: Apache 2.0 (commercial use permitted)
    Tasks: Object Detection
    Model Types: ONNX
    Model Variants: mobile-object-localizer:192x192 (available for RVC2, RVC4)