Model Details
Model Description
This model is based on YOLOv6 and has been custom-trained to detect people in thermal images that are cast into a 3-channel grayscale format. It is specifically tailored for use with OAK Thermal devices, ensuring optimized performance in real-world thermal-imaging applications.
Developed by: Luxonis
Shared by: Luxonis
Model type: Computer Vision
License: Apache 2.0
Resources for more information:
Training Details
Training Data
The FLIR dataset, the Roboflow dataset, and synthetic datasets were used to train the model. Annotations were filtered by size, because smaller objects are barely visible at the camera's resolution. The dataset consists of 1.5K FLIR images, 1.5K Roboflow images, and 10K synthetic thermal images.
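The size filter described above can be sketched as a simple area threshold. Note that the actual cutoff used during training is not stated; the 0.1%-of-image-area value below is an illustrative assumption.

```python
# Illustrative sketch of size-based annotation filtering. The threshold
# (0.1% of the image area) is an assumption, not the value used in training.
def filter_small_boxes(boxes, img_w, img_h, min_area_frac=0.001):
    """Keep only boxes whose pixel area exceeds a fraction of the image area.

    boxes: list of (xmin, ymin, xmax, ymax) in pixel coordinates.
    """
    min_area = min_area_frac * img_w * img_h
    return [
        (x0, y0, x1, y1)
        for (x0, y0, x1, y1) in boxes
        if (x1 - x0) * (y1 - y0) >= min_area
    ]

boxes = [(0, 0, 5, 5), (100, 100, 300, 280)]
kept = filter_small_boxes(boxes, img_w=512, img_h=384)
# The 5x5 box (25 px) falls below 0.1% of 512*384 (~197 px) and is dropped.
```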
Testing Details
Metrics
The model was trained at a resolution of 384x512. On the testing set, which comprised 10% of the dataset, the model achieved a mean Average Precision (mAP) of 88%.
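For context, mAP scores a detection as correct when its intersection-over-union (IoU) with a ground-truth box exceeds a threshold. A minimal IoU function over (xmin, ymin, xmax, ymax) boxes:

```python
def iou(a, b):
    """Intersection-over-union of two (xmin, ymin, xmax, ymax) boxes."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 0.3333... (50 / 150)
```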
Technical Specifications
Input/Output Details
Input:
Name: image
Info: NCHW RGB format with images normalized to a range of 0-1.
Output:
Name: multiple (see NN archive)
Info: Raw, unprocessed outputs containing the candidate detections
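The input layout described above (3-channel grayscale, NCHW, normalized to 0-1) can be sketched in NumPy. On-device, this casting is handled by the pipeline; this snippet only illustrates the expected tensor shape, and resizing to 384x512 is assumed to happen beforehand.

```python
import numpy as np

def preprocess_thermal(frame_gray):
    """Cast a single-channel thermal frame into the model's expected input:
    3-channel, NCHW, float32, normalized to 0-1.

    frame_gray: uint8 array of shape (H, W), already resized to 384x512.
    """
    rgb = np.repeat(frame_gray[..., None], 3, axis=-1)       # (H, W, 3)
    chw = rgb.transpose(2, 0, 1).astype(np.float32) / 255.0  # (3, H, W), 0-1
    return chw[None]                                          # (1, 3, H, W)

frame = np.zeros((384, 512), dtype=np.uint8)
inp = preprocess_thermal(frame)
print(inp.shape)  # (1, 3, 384, 512)
```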
Model Architecture
Backbone: EfficientRep backbone
Neck: Rep-PAN neck
Head: Efficient decoupled head that is anchor-free
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Utilization
This model is targeted at usage with DepthAI v3. Below, we present the most crucial utilization steps for this particular model.
Install DAIv3 library:
pip install depthai
pip install depthai-nodes
Define thermal node for the camera:
cam = pipeline.create(dai.node.Thermal).build()
The camera outputs frames in YUV format, whereas the model consumes BGR images, so we define a HostNode that performs this conversion.
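The pixel conversion the HostNode performs can be sketched in NumPy. This assumes an NV12 chroma layout (full-resolution Y plane followed by interleaved half-resolution U/V) and BT.601 full-range coefficients; the exact frame layout delivered by the device may differ.

```python
import numpy as np

def nv12_to_bgr(buf, width, height):
    """Convert a flat NV12 buffer to an HxWx3 uint8 BGR image.

    NV12 layout and BT.601 full-range coefficients are assumptions here;
    check the device's actual ImgFrame type before relying on this.
    """
    y = buf[: width * height].reshape(height, width).astype(np.float32)
    uv = buf[width * height:].reshape(height // 2, width // 2, 2).astype(np.float32)
    # Upsample chroma to full resolution (nearest neighbour).
    u = np.repeat(np.repeat(uv[..., 0], 2, axis=0), 2, axis=1) - 128.0
    v = np.repeat(np.repeat(uv[..., 1], 2, axis=0), 2, axis=1) - 128.0
    b = y + 1.772 * u
    g = y - 0.344136 * u - 0.714136 * v
    r = y + 1.402 * v
    return np.clip(np.stack([b, g, r], axis=-1), 0, 255).astype(np.uint8)

# A uniform mid-grey frame: Y = 128, U = V = 128 (zero chroma offset).
buf = np.full(384 * 512 * 3 // 2, 128, dtype=np.uint8)
bgr = nv12_to_bgr(buf, width=512, height=384)
print(bgr.shape)  # (384, 512, 3)
```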
The model is automatically parsed by DAI and it outputs the dai.ImgDetections message (bounding boxes, labels, and scores of the detected objects).
Get model output(s):
while pipeline.isRunning():
    nn_output: dai.ImgDetections = parser_output_queue.get()
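The detections in a dai.ImgDetections message carry coordinates normalized to the 0-1 range, so they are typically scaled back to pixel coordinates before drawing. A minimal sketch, using plain dicts that mirror the detection fields (label, confidence, xmin/ymin/xmax/ymax):

```python
def to_pixel_boxes(detections, frame_w, frame_h):
    """Convert normalized (0-1) detection boxes to pixel coordinates.

    detections: iterable of dicts mirroring the fields of the detections
    inside a dai.ImgDetections message.
    """
    out = []
    for d in detections:
        out.append({
            "label": d["label"],
            "confidence": d["confidence"],
            "bbox": (
                int(d["xmin"] * frame_w), int(d["ymin"] * frame_h),
                int(d["xmax"] * frame_w), int(d["ymax"] * frame_h),
            ),
        })
    return out

dets = [{"label": 0, "confidence": 0.91,
         "xmin": 0.25, "ymin": 0.5, "xmax": 0.5, "ymax": 1.0}]
print(to_pixel_boxes(dets, 512, 384))
# [{'label': 0, 'confidence': 0.91, 'bbox': (128, 192, 256, 384)}]
```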
Example
You can quickly run the model using our example.
The example demonstrates how to build a 1-stage DepthAI pipeline consisting of a thermal person-detection model.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.