Compile with TensorRT Lib on Jetson Orin Nano Dev Kit in C++
Issue Overview
Users are encountering issues when attempting to use the TensorRT library in C++ programs on the Nvidia Jetson Orin Nano Developer Kit. Specifically, the error arises during the configuration phase with CMake, where the following error message is displayed:
CMake Error at CMakeLists.txt:18 (find_package):
By not providing "FindTensorRT.cmake" in CMAKE_MODULE_PATH this project has
asked CMake to find a package configuration file provided by "TensorRT",
but CMake did not find one.
Could not find a package configuration file provided by "TensorRT" with any
of the following names:
TensorRTConfig.cmake
tensorrt-config.cmake
Add the installation prefix of "TensorRT" to CMAKE_PREFIX_PATH or set
"TensorRT_DIR" to a directory containing one of the above files.
Context
- Hardware: Jetson Orin Nano Developer Kit
- Software: Jetpack 5.1.1, CUDA 11.4.315, cuDNN 8.6.0.166, TensorRT 8.5.2.2
- Installation Method: SD Card Image Method 1
- Frequency: The issue appears consistently when users attempt to compile their projects using TensorRT.
- Impact: This issue prevents users from successfully compiling their applications that rely on TensorRT, hindering development and functionality.
Possible Causes
- Missing Configuration Files: The absence of TensorRTConfig.cmake or tensorrt-config.cmake files could indicate that TensorRT is not properly installed or configured for use with CMake.
- Incorrect Installation Path: The installation prefix for TensorRT may not be correctly set in CMAKE_PREFIX_PATH, so CMake fails to locate the necessary files.
- Development Package Not Installed: Users may not have installed the development package for TensorRT, which provides the headers and libraries needed at build time.
- User Configuration Errors: Users might not have correctly configured their CMakeLists.txt or may have overlooked necessary steps in setting up their environment.
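For background, find_package(TensorRT) can be satisfied in two ways, and the error message lists both. A minimal CMake sketch (the cmake/ subdirectory name is just an example, not something the project requires):

```cmake
# find_package(TensorRT) can succeed in two ways:
#
# 1. MODULE mode: CMake searches CMAKE_MODULE_PATH for FindTensorRT.cmake.
#    Appending a project-local directory (here "cmake/", an example name)
#    makes a hand-written module visible:
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR}/cmake)

# 2. CONFIG mode: CMake searches CMAKE_PREFIX_PATH for TensorRTConfig.cmake
#    or tensorrt-config.cmake, files the package itself would normally ship.
#    The TensorRT 8.x packages in JetPack do not appear to install either
#    file, which matches the error above.
find_package(TensorRT REQUIRED)
```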
Troubleshooting Steps, Solutions & Fixes
Step-by-Step Instructions
- Verify TensorRT Installation:
  - Check whether TensorRT is installed by running:
    dpkg-query -W tensorrt
  - Ensure the command returns a version number (e.g., 8.5.2.2).
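The check above can be wrapped in a small script that also reports a missing package clearly (a sketch; it guards against systems without dpkg at all):

```shell
#!/bin/sh
# Report the installed TensorRT version via dpkg, or say it is missing.
# Guards against systems where dpkg-query itself is unavailable.
if command -v dpkg-query >/dev/null 2>&1; then
  dpkg-query -W tensorrt 2>/dev/null || echo "tensorrt: not installed"
else
  echo "dpkg-query not available on this system"
fi
```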
- Locate Required Files:
  - Check whether the required libraries and headers exist:
    ls /usr/lib/aarch64-linux-gnu/libnvinfer*
    ls /usr/include/aarch64-linux-gnu/NvInfer*
  - If these files are missing, the installation is incomplete.
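The same check can be scripted so it prints one line per expected path instead of failing mid-listing (a sketch; the paths assume JetPack's default aarch64 layout):

```shell
#!/bin/sh
# Check for the main TensorRT library and header in JetPack's default
# aarch64 locations; prints "found:" or "missing:" for each path.
for f in /usr/lib/aarch64-linux-gnu/libnvinfer.so \
         /usr/include/aarch64-linux-gnu/NvInfer.h; do
  if [ -e "$f" ]; then
    echo "found:   $f"
  else
    echo "missing: $f"
  fi
done
```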
- Set CMAKE_PREFIX_PATH:
  - Add the installation prefix of TensorRT to your CMAKE_PREFIX_PATH:
    export CMAKE_PREFIX_PATH=/usr/lib/aarch64-linux-gnu/
- Create FindTensorRT.cmake File:
  - If the above steps do not resolve the issue, create a custom FindTensorRT.cmake file similar to the one found here: FindTensorRT.cmake.
  - Place this file in the same directory as your CMakeLists.txt and add that directory to CMAKE_MODULE_PATH so CMake can find the module.
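As a starting point, a minimal FindTensorRT.cmake might look like the following. This is a sketch assuming JetPack's default aarch64 install paths; a production module would also handle version checks and the plugin/parser libraries:

```cmake
# Minimal FindTensorRT.cmake sketch (assumes JetPack's default locations).
find_path(TensorRT_INCLUDE_DIR NvInfer.h
  HINTS /usr/include/aarch64-linux-gnu)
find_library(TensorRT_LIBRARY nvinfer
  HINTS /usr/lib/aarch64-linux-gnu)

# Standard helper that sets TensorRT_FOUND and prints a status message.
include(FindPackageHandleStandardArgs)
find_package_handle_standard_args(TensorRT DEFAULT_MSG
  TensorRT_INCLUDE_DIR TensorRT_LIBRARY)

if(TensorRT_FOUND)
  set(TensorRT_INCLUDE_DIRS ${TensorRT_INCLUDE_DIR})
  set(TensorRT_LIBRARIES ${TensorRT_LIBRARY})
endif()
```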
- Modify CMakeLists.txt:
  - Ensure your CMakeLists.txt includes:
    find_package(TensorRT REQUIRED)
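Putting the pieces together, a minimal CMakeLists.txt might look like this (a sketch; trt_example and main.cpp are placeholder names, and the variable names match the FindTensorRT.cmake module described above):

```cmake
cmake_minimum_required(VERSION 3.10)
project(trt_example LANGUAGES CXX)  # trt_example is a placeholder name

# Make a custom FindTensorRT.cmake placed next to this file visible to CMake.
list(APPEND CMAKE_MODULE_PATH ${CMAKE_CURRENT_SOURCE_DIR})

find_package(TensorRT REQUIRED)

add_executable(trt_example main.cpp)  # main.cpp is a placeholder source file
target_include_directories(trt_example PRIVATE ${TensorRT_INCLUDE_DIRS})
target_link_libraries(trt_example PRIVATE ${TensorRT_LIBRARIES})
```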
- Run CMake Again:
  - After making these changes, run the following commands again in your project directory:
    mkdir build && cd build
    cmake ..
    make
Additional Recommendations
- Ensure all dependencies are installed and up-to-date.
- Regularly check for updates to Jetpack and associated libraries.
- Consult Nvidia’s documentation for any specific configurations required for your version of Jetpack and TensorRT.
Unresolved Aspects
While many users have successfully resolved their issues by following these steps, some may still encounter challenges related to specific configurations or environmental factors that require further investigation. Testing with different setups or consulting additional resources may be necessary for persistent issues.