    Model Details
    Model Description
    The gaze estimation model is a lightweight, simple, custom convolutional neural network. The model has 3 inputs: a cropped left-eye image, a cropped right-eye image, and 3 values (pitch, yaw, roll) from a head pose estimation model. The network outputs a 3-D vector corresponding to the direction of a person's gaze in a Cartesian coordinate system in which the Z-axis points from the person's eyes (the midpoint between the left and right eye centers) toward the camera center, the Y-axis is vertical, and the X-axis is orthogonal to both so that (x, y, z) form a right-handed coordinate system.
    • Developed by: OpenVINO
    • Shared by: Luxonis
    • Model type: Computer Vision
    • License: Apache 2.0
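    For illustration, the output vector can be converted into horizontal and vertical gaze angles. Below is a minimal NumPy sketch of this conversion; it is our own illustration (not part of the model card), and the exact sign conventions should be verified against your setup:
    import numpy as np

    def gaze_vector_to_angles(gaze):
        # Normalize the (x, y, z) gaze vector described above.
        x, y, z = gaze / np.linalg.norm(gaze)
        yaw = np.degrees(np.arctan2(x, z))  # horizontal angle, degrees
        pitch = np.degrees(np.arcsin(y))    # vertical angle, degrees
        return yaw, pitch

    # Looking straight at the camera (+Z) yields (0.0, 0.0).
    print(gaze_vector_to_angles(np.array([0.0, 0.0, 1.0])))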
    Training Details
    Training Data
    The training dataset is not provided. For evaluation, two randomly held-out individuals from an OpenVINO internal dataset containing images of 60 people with different gaze directions are used.
    Testing Details
    Metrics
    The accuracy of gaze direction prediction is evaluated using the mean absolute error (MAE) of the angle (in degrees) between the ground-truth and predicted gaze directions.
    Dataset                Mean ± std. of absolute angle error (°)
    OV Internal Dataset    6.95 ± 3.58
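    This angular error is straightforward to reproduce per sample. Below is a minimal NumPy sketch of the angle computation; it is our own illustration, not the official evaluation code:
    import numpy as np

    def angular_error_deg(pred, gt):
        # Angle (degrees) between predicted and ground-truth gaze vectors.
        pred = pred / np.linalg.norm(pred)
        gt = gt / np.linalg.norm(gt)
        cos = np.clip(np.dot(pred, gt), -1.0, 1.0)  # clip guards against rounding
        return np.degrees(np.arccos(cos))

    # Two vectors roughly 5.7 degrees apart:
    print(angular_error_deg(np.array([0.0, 0.0, 1.0]), np.array([0.1, 0.0, 1.0])))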
    Technical Specifications
    Input/Output Details
    • Input:
      • Name: Multiple (please consult NN archive config.json)
    • Output:
      • Name: Identity
        • Info: the 3-D gaze direction vector.
    Model Architecture
    The model is a simple, custom VGG-like convolutional neural network with 3 inputs and 1 output.
    Throughput
    Model variant: gaze-estimation-adas:60x60
    • Input shapes: [[1, 3, 60, 60], [1, 3, 60, 60], [1, 3]]
    • Output shape: [1, 1, 1, 3] (see the shape sanity check after the table below)
    • Params (M): 1.882
    • GFLOPs: 0.070
    Platform    Precision    Throughput (infs/sec)    Power Consumption (W)
    RVC2        FP16         564.65                   N/A
    RVC4        FP16         656.25                   2.45
    * Benchmarked using 2 threads (and the DSP runtime in balanced mode for RVC4).
    * Parameters and FLOPs are obtained from the package.
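    To sanity-check the shapes above, the ONNX variant can be exercised with dummy tensors. The sketch below is ours and makes assumptions: it uses onnxruntime, the local filename is hypothetical, and the real input names and their order come from the NN archive config.json:
    import numpy as np
    import onnxruntime as ort

    sess = ort.InferenceSession("gaze-estimation-adas.onnx")  # hypothetical local path
    names = [i.name for i in sess.get_inputs()]  # real names live in config.json
    feeds = {
        names[0]: np.random.rand(1, 3, 60, 60).astype(np.float32),  # left-eye crop
        names[1]: np.random.rand(1, 3, 60, 60).astype(np.float32),  # right-eye crop
        names[2]: np.zeros((1, 3), dtype=np.float32),               # (pitch, yaw, roll)
    }
    (gaze,) = sess.run(None, feeds)
    print(gaze.shape)  # expected: (1, 1, 1, 3)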
    Utilization
    Models converted for RVC platforms can be used for inference on OAK devices. DepthAI pipelines define the information flow linking the device, the inference model, and the output parser (as defined in the model head(s)). Below, we present the most important utilization steps for this particular model. Please consult the docs for more information.
    Install the DepthAI v3 and depthai-nodes libraries:
    pip install depthai
    pip install depthai-nodes
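
    To confirm the installation, you can check that a 3.x version of DepthAI is reported (our suggestion, not an official step):
    python3 -c "import depthai as dai; print(dai.__version__)"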
    
    This model should work together with face detection and head pose estimation models.
    Define model:
    import depthai as dai

    model_description = dai.NNModelDescription(
        "luxonis/gaze-estimation-adas:60x60"
    )
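
    The description can then be resolved to a local NN archive. Here is a short sketch using the DepthAI v3 model zoo helpers (the call downloads the model or reuses a cached copy):
    archive_path = dai.getModelFromZoo(model_description)
    nn_archive = dai.NNArchive(archive_path)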
    
    The model expects 3 inputs (multi-input model): a cropped left-eye image, a cropped right-eye image, and head pose values. See the Example section below for a complete multi-stage setup.
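    A sketch of wiring the three inputs inside a pipeline via input queues follows. The input names are hypothetical (the real ones are listed in the NN archive config.json), and the NN archive from the previous step is assumed:
    import depthai as dai

    with dai.Pipeline() as pipeline:
        nn = pipeline.create(dai.node.NeuralNetwork)
        nn.setNNArchive(nn_archive)  # NN archive resolved earlier

        # Hypothetical input names -- consult config.json for the real ones.
        left_q = nn.inputs["left_eye"].createInputQueue()
        right_q = nn.inputs["right_eye"].createInputQueue()
        pose_q = nn.inputs["head_pose"].createInputQueue()
        out_q = nn.out.createOutputQueue()

        pipeline.start()
        # ... send ImgFrame / NNData messages into the queues here ...
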
    Inspect model head(s):
    • RegressionParser that outputs a Predictions message (the 3-D gaze vector).
    Get parsed output(s):
    from depthai_nodes.message import Predictions  # import path may vary across depthai-nodes versions

    while pipeline.isRunning():
        parser_output: Predictions = parser_output_queue.get()
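        # Unpack the regression output into the (x, y, z) gaze vector.
        # Assumption: the Predictions message exposes a list of scalar
        # Prediction items; verify against your depthai-nodes version.
        gaze = [p.prediction for p in parser_output.predictions]
        print(gaze)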
    
    Example
    You can quickly run the model using our example.
    The example demonstrates how to build a 3-stage DepthAI pipeline consisting of a face detection model, a head pose estimation model, and a gaze estimation model. It automatically downloads the models, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
    To try it out, run:
    python3 main.py
    
    Gaze estimation ADAS
    Fast gaze estimation model.
    License
    Apache 2.0
    Commercial use
    Downloads
    970
    Tasks
    Regression
    Model Types
    ONNX
    Model Variants
    Name                          Available For    Created At
    gaze-estimation-adas:60x60    RVC2, RVC4       8 months ago