Running RF-DETR with DeepStream on NVIDIA Jetson Thor
- ridgerun


What is RF-DETR?
RF-DETR is a state-of-the-art, transformer-based model for object detection and instance segmentation developed by Roboflow and released as an open-source project under the Apache-2.0 license, making it commercial-friendly and suitable for production deployments.
A key reason RF-DETR has drawn attention is its strong benchmark results: it is reported as the first real-time model to exceed 60 AP on the Microsoft COCO benchmark, demonstrating that transformer-based detectors can now match or surpass YOLO-family models in real-time accuracy while remaining practical for deployment.
Key features of RF‑DETR include:
YOLO-class real-time accuracy: RF-DETR delivers accuracy comparable to, and in some cases exceeding, leading YOLO models while maintaining real-time performance. It surpasses the 60% mAP barrier on COCO and achieves state-of-the-art results on the RF100-VL benchmark.
Multiple Model Variants: To cater to different deployment needs, RF-DETR is available in multiple sizes (Nano, Small, Medium, Large) that balance accuracy and speed. Lighter variants (Nano/Small) can run even faster on constrained hardware, while larger variants offer higher accuracy.
Edge-Friendly & Open-Source: The combination of efficient model sizes and an Apache-2.0 license makes RF-DETR well suited for edge devices like NVIDIA Jetson, without licensing constraints that complicate commercial use.
DeepStream RF-DETR
DeepStream RF-DETR is an open-source wrapper that makes it possible to run RF-DETR models inside NVIDIA DeepStream pipelines by providing both the parsing logic and example configurations for the DeepStream nvinfer plugin. It fills the gap between RF-DETR’s ONNX/TensorRT engines and DeepStream’s metadata expectations so bounding boxes, class IDs, and confidence scores are correctly interpreted by the pipeline at runtime.

What you get out of the box:
Supported model sizes: The project includes support for multiple sizes of RF-DETR models — Nano, Small, Medium, and Large — enabling tradeoffs between speed and detection quality.
Precision modes: Allows you to select between full precision and faster, lighter inference.
FP32: Fully supported and stable results.
FP16: Available and shows higher framerates, but detection quality may be degraded. This precision mode should be validated before use.
DeepStream version compatibility: Tested with DeepStream 7.0 and 8.0, ensuring you can integrate with those major releases out of the box.
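In practice, precision is selected through the standard nvinfer network-mode property in the model's configuration file. As an illustrative fragment (the value shown is an assumption, not the repo's shipped config):

```ini
[property]
# nvinfer precision selection: 0 = FP32, 1 = INT8, 2 = FP16
network-mode=2
```

Because FP16 can degrade RF-DETR detection quality, compare its detections against an FP32 run before enabling it in production.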
How it works
Parser library: The build process produces a shared library that implements a custom bounding-box parser for RF-DETR outputs. This library is referenced in the nvinfer config so DeepStream can interpret raw model outputs as detection metadata.
Configuration files: Example nvinfer configs in the repo include all the necessary properties (onnx-file, model-engine-file, parse-bbox-func-name, class count, input shape, etc.) for RF-DETR models to run seamlessly in DeepStream apps.
Model integration: Scripts in the repository help download RF-DETR ONNX weights for different model sizes, making it easy to generate TensorRT engines and swap models in your pipeline.
This setup lets you plug RF-DETR into DeepStream pipelines easily; drop in the parser lib, point nvinfer at the provided config file, and DeepStream handles the rest.
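The wiring described above can be sketched as a minimal nvinfer config fragment. The file names, parser symbol, and class count below are illustrative placeholders, not values copied from the repository; use the repo's actual sample config in a real deployment:

```ini
[property]
onnx-file=rfdetr-nano.onnx
model-engine-file=rfdetr-nano.onnx_b1_gpu0_fp32.engine
batch-size=1
num-detected-classes=91
gie-unique-id=1
# Custom parser built by the repo's make step (names are placeholders)
parse-bbox-func-name=NvDsInferParseRFDETR
custom-lib-path=./libdeepstream_rfdetr.so
```

At runtime, nvinfer builds (or loads) the TensorRT engine, runs inference, and hands the raw output tensors to the named parser function, which converts them into the detection metadata DeepStream expects.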
Installing DeepStream RF-DETR on Jetson AGX Thor
Prerequisites:
Jetson AGX Thor with JetPack 7.1 and DeepStream 8.0.
A working DeepStream + GStreamer environment.
1. Clone and build the parser library
On a system with DeepStream installed, build with make to produce the shared library used by nvinfer.
git clone https://github.com/ridgerun-ai/deepstream-rfdetr.git
cd deepstream-rfdetr
make
2. Download RF-DETR ONNX weights
# Install uv if needed: https://docs.astral.sh/uv/getting-started/installation/
# Then download:
uv run ./download_weights.py rfdetr-nano
# Output: rfdetr-nano.onnx (in the current directory)
Supported IDs:
rfdetr-nano
rfdetr-small
rfdetr-medium
rfdetr-large
rfdetr-base (deprecated)
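To benchmark all sizes, the single-model download above can be wrapped in a loop. This is a convenience sketch around the repo's download_weights.py; the `|| echo` keeps the loop going if one download fails:

```shell
# Convenience loop to fetch every current (non-deprecated) model size.
# MODELS mirrors the supported IDs listed above; download_weights.py is
# the script shipped in the deepstream-rfdetr repository.
MODELS="rfdetr-nano rfdetr-small rfdetr-medium rfdetr-large"
for model in $MODELS; do
  # Each call writes $model.onnx to the current directory.
  uv run ./download_weights.py "$model" || echo "skipped $model"
done
```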
3. Run a minimal pipeline
Use this simple pipeline showing RF-DETR inference in DeepStream. Change the input and output paths as needed.
$ OUTPUT_FILE=output.mp4
$ gst-launch-1.0 -e filesrc location=/opt/nvidia/deepstream/deepstream/samples/streams/sample_1080p_h264.mp4 ! decodebin ! queue ! mux.sink_0 nvstreammux name=mux width=1920 height=1080 batch-size=1 ! nvinfer config-file-path=deepstream_rfdetr_bbox_config.txt ! queue ! nvdsosd ! nvv4l2h264enc ! h264parse ! queue ! mp4mux ! filesink location=$OUTPUT_FILE
Jetson AGX Thor Performance Results
We measured framerate (FPS) running RF-DETR with DeepStream on Jetson AGX Thor and compared it to two reference platforms (Jetson AGX Orin and DGX Spark) across all RF-DETR model sizes: Nano, Small, Medium, Base, and Large. Results are shown separately for FP32 and FP16 precision.
Framerate comparison


This comparison highlights how Jetson AGX Thor stacks up not just against another edge platform (Jetson AGX Orin) but also against a server-class system (DGX Spark) when running this detection model with DeepStream.
Taking Edge AI Performance Further? We Can Help
The NVIDIA Jetson Thor delivers strong throughput running RF-DETR with DeepStream, but getting the most out of edge AI systems often requires more than just hardware. RidgeRun.ai specializes in optimizing AI performance, streamlining deployment, and building scalable solutions that run efficiently on platforms like Jetson.
We can assist with model optimization and TensorRT tuning, DeepStream pipeline refinement and efficient GStreamer integration, custom model integration, and benchmarking and performance measurement.
Whether you are deploying RF-DETR or other detection models in robotics, autonomous systems, smart cities, or industrial vision, our team can help accelerate your path from prototype to production.
Contact RidgeRun’s AI Engineering Services to unlock the full performance of your edge AI deployment, optimize models for DeepStream, and ensure your solutions run efficiently on Jetson AGX Thor and beyond.



