How to Run the Axelera Metis M.2 on Raspberry Pi 5
- Daniela Brenes
- Jul 16
- 5 min read
Updated: Aug 27

The Axelera Metis M.2 on Raspberry Pi 5 forms a powerful and affordable combination that caters to both tech enthusiasts and production-ready applications. The Metis M.2 brings robust Deep Learning capabilities, enabling real-time tasks like:
Image classification
Object detection
Semantic / instance segmentation
Keypoint detection
Pose estimation
Depth estimation
License plate recognition
Image enhancement / super-resolution
Large Language Models (LLM)
while the Raspberry Pi 5 keeps the system's general tasks running smoothly.
This Axelera module runs the industry's most popular and battle-tested models, such as:
YOLOv3 - YOLOv11, including segmentation and pose estimation versions
MobileNetv2 - MobileNetv4, including SSD
ResNet18 - ResNet101
Llama 3.2
Phi 3 mini
Simultaneously, the Raspberry Pi 5 enhances the overall experience by handling:
User Interfaces (GUIs): Creating intuitive and responsive user interactions.
Web Applications and Network Connectivity: Managing communication and data exchange seamlessly.
Multimedia Handling: Supporting features like snapshots, video recording, and more.
Together, they provide a versatile platform that bridges the gap between prototyping and production-ready solutions.
In this guide, we will walk you through the process of setting up and running a YOLOv8 object detection model using the Axelera Metis M.2 on Raspberry Pi 5 setup.
Setting Up the Axelera Metis M.2 on the Raspberry Pi 5
Gather Your Components
To assemble the Axelera Metis on Raspberry Pi 5 duo, you will need the following components:
A Raspberry Pi 5: 4GB, 8GB or 16GB, preferably 8GB or higher
A Raspberry Pi M.2 HAT+
An Axelera Metis M.2 card (with heatsink)
Micro SD card: 16GB or higher. A high-speed SD card is recommended for a smoother experience
A micro HDMI cable and a display
The official Raspberry Pi 5 power supply
A keyboard and a mouse
All required components for setting up Axelera Metis M.2 with Raspberry Pi 5
Attach the Raspberry Pi M.2 HAT+
Follow the official Raspberry Pi documentation on how to assemble the M.2 HAT+.
Assembled Raspberry Pi M.2 HAT+.
The Axelera Metis M.2 card doesn’t fit properly onto the Raspberry Pi M.2 HAT+, making it impossible to screw it down securely. You'll likely need a workaround to keep the card flat and stable. In our case, we used rubber bands to hold it in place!
Axelera Metis M.2 with heatsink installed on Raspberry Pi M.2 HAT+.
Connect the Micro HDMI Cable
Plug one end of the micro HDMI cable into the Raspberry Pi 5 and the other end into your display.
Connecting the micro HDMI cable to the Raspberry Pi 5.
Connect the Keyboard and Mouse
Plug the keyboard and mouse into the USB ports on the Raspberry Pi 5.
Keyboard and mouse connected via USB to the Raspberry Pi 5.
Connect the Power Supply
Plug the power supply into the Raspberry Pi 5, but do not power it on just yet.
Raspberry Pi 5 power supply connection.
We are not inserting the SD card yet because we will flash it with the Raspberry Pi OS in the next section.
Preparing Raspberry Pi 5 for Axelera Deep Learning Inference
Install the official Raspberry Pi OS onto the SD card: follow the instructions provided on the official Raspberry Pi website.
Insert the SD card into the Raspberry Pi: gently insert the flashed SD card into the SD card slot on the underside of the Raspberry Pi.
Power on the Raspberry Pi: make sure the monitor is connected to the Raspberry Pi via the HDMI cable before you power it on for the first time. If no monitor is detected on first boot, the Raspberry Pi will configure itself for headless operation, and the desktop interface will not be available on subsequent boots.
Make sure the Raspberry Pi firmware is up to date:
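On a standard Raspberry Pi OS install, updating both the OS packages and the board firmware can be done from the terminal; a typical sequence looks like:

```shell
# Update the package lists and upgrade the OS
sudo apt update && sudo apt full-upgrade -y

# Apply the latest bootloader/firmware if a newer one is available
sudo rpi-eeprom-update -a

# Reboot so the new firmware takes effect
sudo reboot
```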
Enable Raspberry Pi PCI Gen 3.0 speeds:
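PCIe Gen 3.0 is enabled through the boot configuration; on recent Raspberry Pi OS releases the file lives at /boot/firmware/config.txt:

```shell
# Append the PCIe Gen 3 setting to the boot configuration and reboot
echo "dtparam=pciex1_gen=3" | sudo tee -a /boot/firmware/config.txt
sudo reboot
```

Alternatively, sudo raspi-config exposes the same setting under Advanced Options > PCIe Speed.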
Install Docker: follow the official instructions on the Docker website.
Configure Docker:
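Configuring Docker here mainly means letting your user run it without sudo; Docker's official post-install steps are:

```shell
# Create the docker group (it may already exist) and add your user to it
sudo groupadd docker
sudo usermod -aG docker $USER

# Apply the new group membership without logging out
newgrp docker

# Verify that Docker works without sudo
docker run hello-world
```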
Installing Voyager SDK for Axelera Metis M.2 on Raspberry Pi 5
The Voyager SDK is the framework you use to build applications that use the Axelera Metis M.2. It is open source and currently hosted on GitHub. With the Voyager SDK you can:
Use models from the Model Zoo
Deploy your own models from PyTorch, ONNX, or other frameworks
Measure the quality and performance of your custom models
Build optimized end-to-end deep learning applications
While the Voyager SDK does not currently support Raspberry Pi OS natively, it runs just fine within a Docker container.
Install the Metis M.2 driver
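As a sketch of this step (the package and module names below follow Axelera's driver instructions and are assumptions here, so check their documentation for your release):

```shell
# Sketch only: package and module names are assumptions taken from
# Axelera's driver instructions; verify them for your SDK release.
sudo apt update
sudo apt install -y metis-dkms

# Load the driver and confirm the module is present
sudo modprobe metis
lsmod | grep metis
```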
Verify that the Axelera device is recognized by the system: your lspci | grep Axelera output should be similar to the following
If you don’t see the output above, you might just need to update the PCI ID database. First, confirm that the device is detected by looking for the following ID in the lspci output:
If it appears, update the PCI IDs.
Now, if you run lspci | grep Axelera again, the full Axelera vendor and card name should display as expected.
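The PCI ID database can be refreshed with the update-pciids utility shipped with pciutils:

```shell
# Download the latest PCI ID database, which includes Axelera's entry
sudo update-pciids

# The device should now show up with its full vendor and card name
lspci | grep Axelera
```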
Create the Docker container: make sure to use the command below with all the options.
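As an illustration of the kind of command involved (the base image, container name, and mount points here are assumptions; use the exact command and options from the Voyager SDK documentation):

```shell
# Illustrative sketch: --privileged and the /dev mount give the container
# access to the Metis PCIe device node, --network host lets the SDK reach
# Axelera's servers, and the workspace mount persists data across sessions.
docker run -it --name voyager-sdk \
    --privileged \
    --network host \
    -v /dev:/dev \
    -v "$HOME/workspace:/workspace" \
    ubuntu:22.04 /bin/bash
```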
Install dependencies
Generate an Axelera API token
Navigate to https://software.axelera.ai/ui/login
Choose the Customers portal
Create an Axelera AI account and log in
Click on your profile picture (upper right corner)
Select Edit Profile
Axelera AI customer portal: menu for account configuration.
Scroll down to Generate an Identity Token
Axelera AI customer portal: token generation.
Add a meaningful description.
Copy and securely save the generated token.
Run the Voyager SDK installer: note that <email> and <token> correspond to the username you used to log in to the Axelera site and the token you just generated, respectively. Also note the --no-driver option, since we installed the driver manually on the host.
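A typical invocation looks like the following sketch; the script name and the --user/--token flags are taken as assumptions from the Voyager SDK repository, while --no-driver is the option discussed above. Keep the <email> and <token> placeholders as your own values:

```shell
# Run from the cloned voyager-sdk directory. --no-driver skips the driver
# install because we already set it up on the host.
./install.sh --user <email> --token <token> --no-driver
```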
Running YOLOv8 Object Detection on Raspberry Pi 5 with Metis M.2
Activate the Voyager SDK Python environment: You’ll need to activate the environment after every reboot.
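Assuming the SDK was installed into its default virtual environment inside the repository checkout, activation looks like:

```shell
# Run from the voyager-sdk directory after every reboot
cd voyager-sdk
source venv/bin/activate
```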
Download pre-built models: while not strictly necessary, you can download models already converted for the Metis M.2. Otherwise, the Voyager SDK will convert them at runtime, which can take up to 10 minutes on the Raspberry Pi 5.
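This step can be run ahead of time so the first inference does not pay the conversion cost; sketched here with the SDK's deploy.py script (the script and model names are assumptions, so check the Voyager SDK model zoo for the exact spelling):

```shell
# Fetch/prepare the YOLOv8s model for the Metis M.2 ahead of time
# (script and model names are assumptions; see the SDK model zoo)
./deploy.py yolov8s-coco
```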
Run the YOLOv8s model: at this point you should see the Metis M.2 performing object detection in real time!
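Running the demo is a single command; sketched below with the SDK's inference.py script (the model name and input source are assumptions):

```shell
# Run YOLOv8s on a USB camera; replace usb:0 with a video file path
# to run on recorded footage instead
./inference.py yolov8s-coco usb:0
```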
Run the optimized pipeline: the Voyager SDK enables optimized video rendering on hosts that support OpenGL or OpenGL ES, instead of relying on OpenCV, which is the default. Note that pre- and post-processing still run on the CPU.
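The renderer is typically selected with a command-line option; the flag name below is an assumption, so check inference.py --help for the exact spelling in your SDK version:

```shell
# Request the OpenGL-based renderer instead of the OpenCV default
# (--display=opengl is an assumed flag name; verify with --help)
./inference.py yolov8s-coco usb:0 --display=opengl
```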
Troubleshooting Common Issues with Axelera on Raspberry Pi 5
Inference crashes
If you experience inference crashes while running models, it may be due to power limitations. This can especially affect larger models that demand higher computational throughput.
To improve stability, Axelera AI recommends setting the MVM utilization to a lower value, such as 30%. Here’s an example of how to apply the MVM utilization limit when running the YOLOv8l model:
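The exact mechanism for capping MVM utilization is version-dependent; as a hypothetical sketch (the environment variable name is an assumption, so consult Axelera's troubleshooting documentation for the real one), the idea is to run the larger model with utilization limited to 30%:

```shell
# Hypothetical sketch: the variable that caps MVM utilization is an
# assumption here; Axelera's docs give the real name for your SDK version.
AXELERA_MVM_UTILIZATION=30 ./inference.py yolov8l-coco usb:0
```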
Contact Us
The Axelera Metis M.2 is a great hardware addition for your project if it needs real-time deep learning inference. At RidgeRun.ai, we specialize in Deep Learning and AI solutions at the edge, using hardware partners like the Axelera modules. If you need help bringing your project to life, please don’t hesitate to contact us at contactus@ridgerun.ai.