Orin Nano TensorRT Module Not Found Error with YOLO
Issue Overview
Users are encountering an error while attempting to convert a YOLOv8 model (yolov8n.pt) to a TensorRT engine (yolov8n.engine) on the Nvidia Jetson Orin Nano developer board. The specific error message indicates that TensorRT is not found, despite the user having installed the necessary libraries. The following symptoms have been reported:
- Warning Message: "TensorRT requires GPU export, automatically assigning device=0."
- Error Output:
- "requirements: Ultralytics requirement [‘tensorrt>7.0.0,<=10.1.0’] not found, attempting AutoUpdate…"
- "RuntimeError: TensorRT currently only builds wheels for x86_64 processors."
This issue arises during the model conversion process and is consistent across attempts, significantly impacting the user’s ability to run inference using TensorRT.
The user’s environment includes:
- Hardware: Nvidia Jetson Orin Nano
- Software: Python 3.10.12, PyTorch 2.3.0, ONNX 1.16.1, and Ultralytics YOLOv8.2.70.
- Installed Libraries: Various TensorRT libraries are confirmed to be installed (version 8.6.2.3 with CUDA 12.2).
Possible Causes
Several potential causes for this issue have been identified:
- Hardware Incompatibility: TensorRT's pip wheels are built only for x86_64 processors (as the RuntimeError states), so Ultralytics' AutoUpdate cannot install the package on the Jetson's ARM (aarch64) platform, even when the JetPack-provided TensorRT libraries are present.
- Software Bugs or Conflicts: The specific versions of TensorRT or PyTorch in use may have compatibility issues with each other.
- Configuration Errors: Incorrect settings in the YOLO library or in environment variables could lead to this issue.
- Driver Issues: Outdated or incompatible drivers for TensorRT or CUDA may prevent proper functioning.
- Environmental Factors: Issues such as an insufficient power supply or overheating could impact performance.
- User Errors: Misconfiguration during installation or setup could leave dependencies missing.
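The architecture mismatch named in the error output can be checked directly. The sketch below uses only the Python standard library and assumes the restriction stated in the reported RuntimeError (pip wheels only for x86_64):

```python
import platform

def pip_tensorrt_wheel_expected() -> bool:
    """True on architectures for which TensorRT publishes pip wheels;
    per the reported error, wheels currently exist only for x86_64."""
    return platform.machine() in ("x86_64", "AMD64")

# On an Orin Nano this prints "aarch64 -> wheel available: False",
# which is exactly why AutoUpdate fails.
print(f"{platform.machine()} -> wheel available: {pip_tensorrt_wheel_expected()}")
```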
Troubleshooting Steps, Solutions & Fixes
To resolve the issue, follow these troubleshooting steps and potential solutions:
- Verify TensorRT Installation:
  - List the installed TensorRT packages (a case-insensitive match is safest, since package names such as tensorrt and python3-libnvinfer are lowercase):

    ```shell
    dpkg -l | grep -i tensorrt
    ```

  - Ensure all necessary components are installed as listed in the forum discussion.
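Beyond dpkg, it is worth confirming that the TensorRT Python bindings are visible to the same interpreter Ultralytics runs under; on Jetson they are installed system-wide by JetPack rather than by pip. A minimal stdlib-only sketch:

```python
import importlib.util
import sys

def tensorrt_importable() -> bool:
    """True if the TensorRT Python bindings are on this interpreter's path."""
    return importlib.util.find_spec("tensorrt") is not None

if tensorrt_importable():
    import tensorrt
    print("TensorRT", tensorrt.__version__, "is importable")
else:
    print("TensorRT bindings are NOT visible to", sys.executable)
```

If this prints the "NOT visible" branch inside a virtual environment but succeeds outside it, the problem is the environment's search path rather than the installation itself.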
- Install Missing Packages:
  - If any required packages are missing, install them using:

    ```shell
    sudo apt-get install tensorrt nvidia-tensorrt-dev python3-libnvinfer-dev
    ```
- Check Compatibility:
  - Ensure that the versions of TensorRT and PyTorch are compatible with each other and with your hardware.
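Gathering the relevant versions can be partly automated. This is a hedged sketch using only the standard library; the package names are the ones listed in the user's environment:

```python
from importlib.metadata import version, PackageNotFoundError

def installed_versions(packages):
    """Map each package name to its installed version, or None if absent."""
    found = {}
    for pkg in packages:
        try:
            found[pkg] = version(pkg)
        except PackageNotFoundError:
            found[pkg] = None
    return found

report = installed_versions(("torch", "onnx", "ultralytics", "tensorrt"))
for pkg, ver in report.items():
    print(f"{pkg}: {ver or 'not installed'}")
```

The printed versions can then be compared against NVIDIA's JetPack compatibility tables before attempting another export.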
- Update Drivers and Libraries:
  - Update CUDA and TensorRT to the latest versions compatible with your Jetson device.
  - Check for any available updates using:

    ```shell
    sudo apt-get update
    sudo apt-get upgrade
    ```
- Reinstall Dependencies:
  - If issues persist, consider reinstalling the YOLO library and its dependencies in a new virtual environment:

    ```shell
    python3 -m venv my-venv
    source my-venv/bin/activate
    pip install ultralytics torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
    ```

  - Note that the cu117 index serves x86_64 wheels; on a Jetson you will generally need the PyTorch builds NVIDIA provides for JetPack, and creating the venv with --system-site-packages keeps the JetPack-installed TensorRT bindings importable from inside it.
- Test with Different Models:
  - Attempt to convert a simpler model or a different YOLO version to isolate whether the issue is specific to yolov8n.pt.
- Run Example Code:
  - Use the provided example code to verify functionality after making changes:

    ```python
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")
    model.export(
        format="engine",
        dynamic=True,
        batch=8,
        workspace=4,
        int8=True,
        data="coco.yaml",
    )

    # Load the exported TensorRT INT8 model
    model = YOLO("yolov8n.engine", task="detect")

    # Run inference
    result = model.predict("https://ultralytics.com/images/bus.jpg")
    ```
- Consult Documentation:
  - Review Nvidia's official documentation for any specific requirements or setup instructions related to TensorRT on Jetson devices.
- Community Support:
  - If unresolved, consider reaching out to community forums or Nvidia support for further assistance.
By systematically following these steps, users should be able to diagnose and potentially resolve the issue of the TensorRT module not being found when using YOLO on the Nvidia Jetson Orin Nano developer board.