Model Details
Model Description
Paddle text detection is an efficient and flexible text detection system designed for real-world applications. It supports over 80 languages and is optimized for deployment in resource-constrained environments. It achieves high recognition performance on multilingual text, including complex scripts, while remaining efficient enough for real-time use. It also offers robust detection and recognition of text in challenging conditions such as low-quality images or varying text orientations.
Developed by: PaddlePaddle
Shared by:
Model type: Computer vision
License:
Resources for more information:
Training Details
Training Data
The model was pretrained on a dataset and then finetuned on different real-world datasets.
Testing Details
Metrics
Metrics are taken from the original paper. Tests were performed on the MSRA-TD500 and CTW1500 datasets.
| Dataset | Precision (%) | Recall (%) | F-score (%) |
| --- | --- | --- | --- |
| MSRA-TD500 | 90.4 | 76.3 | 82.8 |
| CTW1500 | 84.8 | 77.5 | 81.0 |
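For reference, the F-score is the harmonic mean of precision and recall, F = 2·P·R / (P + R); e.g. for MSRA-TD500, 2·90.4·76.3 / (90.4 + 76.3) ≈ 82.8.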
Technical Specifications
Input/Output Details
Input:
Name: x
Info: 0-255 BGR un-normalized image.
Output:
Name: output
Info: A segmentation mask over the entire image; each output value is the probability that the corresponding input pixel belongs to text.
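The probability map can be post-processed into text regions with standard thresholding and contour extraction. The sketch below is illustrative only (it is not the model's bundled parser) and assumes the mask is available as a 2D NumPy float array; the threshold and minimum-area values are placeholders.

import cv2
import numpy as np

def extract_text_boxes(prob_map: np.ndarray, threshold: float = 0.3, min_area: float = 10.0):
    # Threshold the text-probability map and return rotated boxes around candidate text regions.
    binary = (prob_map > threshold).astype(np.uint8)
    contours, _ = cv2.findContours(binary, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
    boxes = []
    for contour in contours:
        if cv2.contourArea(contour) < min_area:
            continue  # skip tiny noise blobs
        boxes.append(cv2.minAreaRect(contour))  # ((cx, cy), (w, h), angle)
    return boxes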
Model Architecture
Backbone: ResNet18, an 18-layer convolutional network with residual connections.
Segmentation Head: a Differentiable Binarization (DB) head, which allows the network to dynamically learn and optimize the binarization process during training. This head enhances the model's ability to distinguish between text and background.
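For reference, the DB head replaces the hard threshold with a steep sigmoid so that the binarization step stays differentiable during training. A minimal sketch of this formula, assuming the probability and threshold maps are NumPy arrays (k = 50 is the amplification factor used in the DB paper):

import numpy as np

def soft_binarize(prob_map: np.ndarray, thresh_map: np.ndarray, k: float = 50.0) -> np.ndarray:
    # Approximate binary map B = 1 / (1 + exp(-k * (P - T))) from the DB paper,
    # where P is the predicted probability map and T the learned threshold map.
    return 1.0 / (1.0 + np.exp(-k * (prob_map - thresh_map)))

At inference time, a simple fixed threshold on the probability map is typically sufficient.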
* Benchmarked with , using 2 threads (and the DSP runtime in balanced mode for RVC4).
* Parameters and FLOPs are obtained from the package.
Utilization
Models converted for RVC Platforms can be used for inference on OAK devices.
DepthAI pipelines define the flow of information between the device, the inference model, and the output parser (as defined in the model head(s)).
Below, we present the most important utilization steps for this particular model.
Please consult the docs for more information.
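A minimal pipeline sketch is shown below. It assumes the ParsingNeuralNetwork helper node from the depthai_nodes package and uses an illustrative model slug; check the model page for the exact model reference and import paths, which may differ between versions.

import depthai as dai
from depthai_nodes import ImgDetectionsExtended, ParsingNeuralNetwork  # import paths may vary between depthai_nodes versions

pipeline = dai.Pipeline()
camera = pipeline.create(dai.node.Camera).build()
# "luxonis/paddle-text-detection:..." is an illustrative slug; use the exact reference from the model page.
nn_with_parser = pipeline.create(ParsingNeuralNetwork).build(camera, "luxonis/paddle-text-detection:256x704")
parser_output_queue = nn_with_parser.out.createOutputQueue()
pipeline.start()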
while pipeline.isRunning():
    parser_output: ImgDetectionsExtended = parser_output_queue.get()  # one parsed message per processed frame
Example
You can quickly run the model using our script.
It automatically downloads the model, creates a DepthAI pipeline, runs the inference, and displays the results using our DepthAI visualizer tool.
To try it out, run: