This resource and insight hub provides education, breaking news, our research, and more for developers, corporate leaders, academics, marketing and business development professionals, and anyone who is new to AI.
Running large language models at the edge is becoming practical as embedded platforms continue to scale in performance. In this post, we show how to run OpenAI's GPT-OSS-20B and GPT-OSS-120B models on the NVIDIA Jetson Thor, focusing on real, reproducible inference. Since these are open-weight models, they can be freely tested on different hardware, which makes it interesting to see how they behave beyond data-center systems. We already tried out the GPT-OSS-120B model on a N
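As a concrete starting point, here is a minimal sketch of querying one of the GPT-OSS models once it is serving on the Jetson behind an OpenAI-compatible endpoint (for example via llama.cpp's server or vLLM). The host, port, and model identifier below are assumptions; adjust them to match however you serve the model.

```python
# Minimal sketch: query GPT-OSS-20B served locally on Jetson Thor through an
# OpenAI-compatible endpoint. The endpoint URL and model name are assumptions,
# not something fixed by the model itself.
import requests

ENDPOINT = "http://localhost:8000/v1/chat/completions"  # assumed local server

payload = {
    "model": "gpt-oss-20b",  # assumed model identifier in your server config
    "messages": [
        {"role": "user", "content": "Summarize why edge inference matters."}
    ],
    "max_tokens": 128,
    "temperature": 0.7,
}

response = requests.post(ENDPOINT, json=payload, timeout=120)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

Because the endpoint follows the OpenAI chat-completions shape, the same request works for either model size; only the model name changes.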
Running RF-DETR with DeepStream is a straightforward way to bring a modern, open-source detector into DeepStream's ready-to-use video analytics pipelines. In this post, we show what RF-DETR is, how the DeepStream RF-DETR wrapper plugs into nvinfer, and the performance we measured on the NVIDIA Jetson Thor, alongside a comparison point for the NVIDIA Jetson AGX Orin.
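For orientation, here is a minimal sketch of what a DeepStream pipeline with an RF-DETR nvinfer stage can look like from Python. The element names are standard DeepStream/GStreamer elements; the input file and the nvinfer config file name are placeholders, and the TensorRT engine that config points to depends on how you export RF-DETR through the wrapper.

```python
# Minimal sketch of running an RF-DETR nvinfer stage inside a DeepStream
# pipeline on Jetson. "sample_720p.h264" and "config_infer_rfdetr.txt" are
# assumed file names -- replace them with your own input and the config
# produced for your exported RF-DETR model.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

PIPELINE = (
    "filesrc location=sample_720p.h264 ! h264parse ! nvv4l2decoder ! "
    "m.sink_0 nvstreammux name=m batch-size=1 width=1280 height=720 ! "
    "nvinfer config-file-path=config_infer_rfdetr.txt ! "  # assumed config name
    "nvvideoconvert ! nvdsosd ! fakesink sync=false"
)

pipeline = Gst.parse_launch(PIPELINE)
pipeline.set_state(Gst.State.PLAYING)

# Block until end-of-stream or an error, then tear the pipeline down.
bus = pipeline.get_bus()
bus.timed_pop_filtered(
    Gst.CLOCK_TIME_NONE, Gst.MessageType.EOS | Gst.MessageType.ERROR
)
pipeline.set_state(Gst.State.NULL)
```

Swapping `fakesink` for a display or RTSP sink turns the same skeleton into a full visual analytics pipeline; the detector itself stays behind the single nvinfer config file.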