    Model Details
    Model Description
EfficientViT is a transformer-based model with novel multi-scale linear attention. Unlike previous state-of-the-art transformer-based models, it is built using only lightweight, hardware-efficient operations, which lets it deliver better performance on edge devices.
• Developed by: MIT Han Lab
• Shared by: Luxonis
• Model type: Computer Vision
• License: Apache 2.0
• Resources for more information: EfficientViT paper (arXiv:2205.14756) and the mit-han-lab/efficientvit GitHub repository
    Training Details
    Training Data
The classification model was trained on ImageNet-1K from random initialization (300 epochs + 20 warmup epochs) using supervised learning.
    Testing Details
    Metrics
To get more information about evaluation, please check the resources listed above; a minimal top-k accuracy sketch follows the table below.
Model            Resolution  ImageNet Top-1 Acc.  ImageNet Top-5 Acc.
EfficientViT-B1  224x224     79.39                94.35
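
The Top-1/Top-5 numbers above are standard top-k accuracy on the ImageNet validation set. A minimal NumPy sketch of the metric (illustrative only; this is not the evaluation code behind the table):

import numpy as np

def topk_accuracy(logits: np.ndarray, labels: np.ndarray, k: int = 5) -> float:
    """logits: (N, 1000) raw model outputs; labels: (N,) ground-truth class ids."""
    topk = np.argsort(logits, axis=1)[:, -k:]     # k highest-scoring classes per sample
    hits = (topk == labels[:, None]).any(axis=1)  # is the label among the top k?
    return float(hits.mean())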
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: input
        • Info: NCHW BGR un-normalized image
    • Output:
      • Name: output
    • Info: Raw (non-softmaxed) scores for the 1000 ImageNet classes (see the sketch after this list).
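
To make the I/O contract concrete, here is a minimal sketch, assuming OpenCV and NumPy and a hypothetical local image file, of building an un-normalized BGR NCHW input and softmaxing the raw 1000-class output:

import cv2
import numpy as np

# Un-normalized BGR NCHW input, matching the "input" tensor spec
img = cv2.imread("image.jpg")                 # hypothetical file; OpenCV loads BGR
img = cv2.resize(img, (224, 224))             # model resolution
tensor = img.transpose(2, 0, 1)[None].astype(np.float32)  # HWC -> 1x3x224x224

# Softmax the raw "output" values into class probabilities
logits = np.zeros((1, 1000), dtype=np.float32)  # stand-in for the model output
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)
print(int(probs.argmax()))                      # Top-1 class id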
    Model Architecture
    • Backbone:
      • Composed of an input stem and four stages, each reducing spatial resolution while increasing channel depth.
  • EfficientViT modules are embedded in Stages 3 and 4 for efficient global and local feature extraction (sketched after this list).
      • Downsampling is handled by MobileNet-like blocks (MBConv).
    • Head:
      • The head processes features from Stages 2, 3, and 4 using simple 1x1 convolutions and upsampling.
      • Features are fused through addition to reduce computational overhead.
      • Designed for simplicity and effectiveness, suitable for both segmentation and classification tasks.
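
For intuition about the attention mechanism, here is a minimal single-head NumPy sketch of ReLU-based linear attention, the building block behind EfficientViT's multi-scale linear attention; names and shapes are illustrative, not the actual implementation:

import numpy as np

def relu_linear_attention(q, k, v, eps=1e-6):
    """q, k: (N, d); v: (N, d_v). Cost is O(N * d * d_v) rather than O(N^2 * d)."""
    q, k = np.maximum(q, 0.0), np.maximum(k, 0.0)  # ReLU feature maps replace softmax
    kv = k.T @ v                                   # (d, d_v), aggregated once over tokens
    numer = q @ kv                                 # (N, d_v)
    denom = q @ k.sum(axis=0)[:, None]             # (N, 1) normalizer
    return numer / (denom + eps)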
    Throughput
    Model variant: efficientvit:b1-224x224
• Input shape: [1, 3, 224, 224]
• Output shape: [1, 1000]
• Params (M): 9.095
• GFLOPs: 0.552
Platform  Precision  Throughput (infs/sec)  Power Consumption (W)
RVC2      FP16       35.55                  N/A
RVC4      FP16       229.41                 3.14
* Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the model package (see the sketch below for a quick check).
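
If you want to sanity-check the parameter count yourself, here is a short sketch using the onnx Python package; the file name is a hypothetical local export of the ONNX model:

import onnx
from onnx import numpy_helper

model = onnx.load("efficientvit_b1.onnx")  # hypothetical local path
n_params = sum(numpy_helper.to_array(t).size for t in model.graph.initializer)
print(f"Params (M): {n_params / 1e6:.3f}")  # should land near 9.095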
    Utilization
Models converted for RVC Platforms can be used for inference on OAK devices. DepthAI pipelines are used to define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most crucial utilization steps for this particular model. Please consult the docs for more information.
    Install DAIv3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
    
    Define model:
import depthai as dai
from depthai_nodes import ParsingNeuralNetwork

pipeline = dai.Pipeline()
camera = pipeline.create(dai.node.Camera).build()  # source node feeding the model

model_description = dai.NNModelDescription(
    "luxonis/efficientvit:b1-224x224"
)

nn = pipeline.create(ParsingNeuralNetwork).build(
    camera, model_description
)
    
    Inspect model head(s):
• ClassificationParser that outputs a Classifications message (detected classes and scores); see the reading sketch below.
    Get parsed output(s):
parser_output_queue = nn.out.createOutputQueue()  # queue of parsed results
pipeline.start()
while pipeline.isRunning():
    parser_output: Classifications = parser_output_queue.get()
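
The Classifications message exposes the parsed class names and confidence scores as parallel lists. A minimal sketch of reading it inside the loop above (assuming Classifications is imported from depthai_nodes.message; the exact path and sort order may vary by depthai-nodes version):

classes = parser_output.classes  # class-name strings, typically sorted by descending score
scores = parser_output.scores    # matching confidence scores
print(f"Top class: {classes[0]} ({scores[0]:.2f})")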
    
    Example
You can quickly run the model using our example script. It automatically downloads the model, creates a DepthAI pipeline, runs inference, and displays the results using our DepthAI visualizer tool. To try it out, run:
    python3 main.py \
        --model luxonis/efficientvit:b1-224x224
    
EfficientViT
Transformer-based classification model.
    License
    Apache 2.0
    Commercial use
    Downloads
    155
    Tasks
    Classification
    Model Types
    ONNX
    Model Variants
Name                     Available For  Created At
efficientvit:b1-224x224  RVC2, RVC4     6 months ago