XFeat: feature points detector and descriptor.
License | Apache 2.0 (commercial use permitted)
Downloads | 1495
Tasks | Feature Detection
Model Types | ONNX
Four model variants are available; each runs on RVC2 and RVC4 and was created about a year ago.
| Metric | Value |
|---|---|
| AUC@5° | 50.20 |
| AUC@10° | 65.40 |
| AUC@20° | 77.10 |
| ACC@10° | 85.1 |
Input shape: `['batch', 3, 'height', 'width']` • Output shapes: feats `['Convfeats_dim_0', 64, 'Convfeats_dim_2', 'Convfeats_dim_3']`, keypoints `['Convkeypoints_dim_0', 65, 'Convfeats_dim_2', 'Convfeats_dim_3']`, heatmaps `['Convfeats_dim_0', 1, 'Sigmoidheatmaps_dim_2', 'Sigmoidheatmaps_dim_3']`

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
|---|---|---|---|
| RVC2 | FP16 | 38.80 | N/A |
| RVC4 | INT8 | 388.10 | 3.69 |

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
|---|---|---|---|
| RVC2 | FP16 | 38.82 | N/A |
| RVC4 | INT8 | 387.55 | 3.41 |

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
|---|---|---|---|
| RVC2 | FP16 | 148.65 | N/A |
| RVC4 | INT8 | 685.82 | 2.73 |

| Platform | Precision | Throughput (infs/sec) | Power Consumption (W) |
|---|---|---|---|
| RVC2 | FP16 | 148.65 | N/A |
| RVC4 | INT8 | 688.83 | 2.57 |
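
The 65-channel keypoints tensor follows the SuperPoint-style detector-head layout that XFeat uses: 64 channels for the positions inside each 8×8 cell, plus one "no keypoint" dustbin channel. A minimal NumPy sketch of decoding it into a full-resolution score map (the helper name and the 64+1 layout are assumptions based on that convention, not stated on this model card):

```python
import numpy as np

def keypoint_scores(logits: np.ndarray) -> np.ndarray:
    """Turn the [1, 65, H/8, W/8] keypoint tensor into an [H, W] score map.

    Assumes 64 channels for the positions inside each 8x8 cell,
    plus one 'no keypoint' dustbin channel (SuperPoint-style head).
    """
    _, c, hc, wc = logits.shape
    assert c == 65, "expected 64 cell positions + 1 dustbin"
    # softmax over the channel axis, numerically stabilized
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    probs = e / e.sum(axis=1, keepdims=True)
    probs = probs[:, :64]                   # drop the dustbin channel
    probs = probs.reshape(1, 8, 8, hc, wc)  # (n, row, col, cell_y, cell_x)
    probs = probs.transpose(0, 3, 1, 4, 2)  # (n, cell_y, row, cell_x, col)
    return probs.reshape(hc * 8, wc * 8)

# example: a 320x240 input gives a 30x40 cell grid -> 240x320 score map
scores = keypoint_scores(np.random.randn(1, 65, 30, 40).astype(np.float32))
print(scores.shape)  # (240, 320)
```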
Install the required packages:

```bash
pip install depthai
pip install depthai-nodes
```
Mono mode:

```python
import cv2
import depthai as dai
# exact import paths depend on the depthai-nodes version
from depthai_nodes import ParsingNeuralNetwork, XFeatMonoParser

pipeline = dai.Pipeline()

model_description = dai.NNModelDescription(
    "luxonis/xfeat:mono-320x240"
)
# <CameraNode> stands for your camera node's output
nn = pipeline.create(ParsingNeuralNetwork).build(
    <CameraNode>, model_description
)

# Get the parser
parser: XFeatMonoParser = nn.getParser(0)

# Inside the pipeline loop: press 's' to set the current frame
# as the reference frame to match against
if cv2.waitKey(1) == ord('s'):
    parser.setTrigger()
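```

To actually read the matches on the host, the parser output can be exposed through an output queue; a minimal sketch, assuming the DepthAI v3 queue API and that the parser emits dai.TrackedFeatures messages as in the stereo description below:

```python
# expose the parser output to the host (before pipeline.start())
parser_output_queue = parser.out.createOutputQueue()

pipeline.start()
while pipeline.isRunning():
    features: dai.TrackedFeatures = parser_output_queue.get()
    for f in features.trackedFeatures:
        # each tracked feature carries an id and a 2D position
        print(f.id, f.position.x, f.position.y)
```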
Stereo mode:

```python
import depthai as dai

pipeline = dai.Pipeline()

model_description = dai.NNModelDescription(
    "luxonis/xfeat:stereo-320x240"
)
nn_archive_path = dai.getModelFromZoo(model_description)
nn_archive = dai.NNArchive(nn_archive_path)
# the NN Archive stores the input shape as [N, C, H, W];
# requestOutput expects (width, height), hence the slice and reversal
input_shape = nn_archive.getConfig().model.inputs[0].shape[2:][::-1]

left_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
right_cam = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)

left_network = pipeline.create(dai.node.NeuralNetwork).build(
    left_cam.requestOutput(input_shape, type=dai.ImgFrame.Type.BGR888p),
    nn_archive
)
left_network.setNumInferenceThreads(2)

right_network = pipeline.create(dai.node.NeuralNetwork).build(
    right_cam.requestOutput(input_shape, type=dai.ImgFrame.Type.BGR888p),
    nn_archive
)
right_network.setNumInferenceThreads(2)
```
We create the parser with the ParserGenerator node because we already have the neural network nodes ready:

```python
# exact import paths depend on the depthai-nodes version
from depthai_nodes import ParserGenerator, XFeatStereoParser

parsers = pipeline.create(ParserGenerator).build(nn_archive)
parser: XFeatStereoParser = parsers[0]
```
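
Before starting the pipeline, the two network outputs have to be wired to the parser and its output exposed to the host. A minimal sketch, assuming the stereo parser exposes `left` and `right` inputs as in the depthai-nodes examples:

```python
# feed both network outputs into the stereo parser
left_network.out.link(parser.left)
right_network.out.link(parser.right)

# host-side queue for the matched features
parser_output_queue = parser.out.createOutputQueue()

pipeline.start()
```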
The parser outputs the matches as a dai.TrackedFeatures message, using each feature's `age` field to tag the source image (age=0 for the left image and age=1 for the right image).

```python
while pipeline.isRunning():
    parser_output: dai.TrackedFeatures = parser_output_queue.get()
```
The example supports two modes: mono and stereo. In mono mode we use a single camera as input and match its frames to a reference image; in stereo mode we use two cameras and match their frames to each other. In mono mode we visualize the matches between the current frame and the reference frame, which can be set by pressing the `s` key in the visualizer; in stereo mode we visualize the matches between the left and right camera frames.

Run the example with:

```bash
python3 main.py
```