ModuleNotFoundError: No module named ‘onnx_graphsurgeon’ on Jetson Nano

Issue Overview

Users are encountering a ModuleNotFoundError when trying to run the convert_to_onnx.py script from the /usr/src/tensorrt/samples/python/onnx_packnet directory on the Jetson Nano board. The exact error is "ModuleNotFoundError: No module named ‘onnx_graphsurgeon’". It appears when the script is run with Python 3.6, indicating that the required onnx_graphsurgeon module is either not installed or not visible to that Python environment.
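A quick way to confirm the failure mode is to ask the interpreter directly whether it can locate the module. This is a minimal diagnostic sketch using only the standard library, with no assumptions about how TensorRT was installed:

```python
# Check whether the current interpreter can see onnx_graphsurgeon.
# find_spec() returns None when a module is not importable, which is
# exactly the condition that raises ModuleNotFoundError.
import importlib.util

spec = importlib.util.find_spec("onnx_graphsurgeon")
if spec is None:
    print("onnx_graphsurgeon is NOT visible to this interpreter")
else:
    print("onnx_graphsurgeon found at:", spec.origin)
```

Running this with the same python3 that runs convert_to_onnx.py tells you immediately whether the problem is a missing package or a different interpreter.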

Possible Causes

  1. Missing ONNX GraphSurgeon package: The ‘onnx_graphsurgeon’ module is not installed in the Python environment used by the Jetson Nano.

  2. Incorrect installation method: The user may have attempted to install the package using an incompatible or incomplete method.

  3. Outdated TensorRT samples: The samples provided in the /usr/src/tensorrt/samples/ directory might be outdated and not aligned with the installed TensorRT version.

  4. Environment configuration issues: The Python environment may not be correctly set up to include the necessary paths for TensorRT and its dependencies.
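To tell cause 1 apart from cause 4, it helps to see exactly which interpreter is running and where it searches for packages. A small sketch using only the standard library:

```python
# Print the interpreter path, its version, and the package search path.
# If pip installed onnx_graphsurgeon for a different interpreter, the
# mismatch shows up here.
import sys

print("Interpreter:", sys.executable)
print("Python version:", sys.version.split()[0])
print("Search path:")
for entry in sys.path:
    print("  ", entry)
```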

Troubleshooting Steps, Solutions & Fixes

  1. Install ONNX GraphSurgeon from NVIDIA’s PyPI index:
    Try installing the package using pip with NVIDIA’s custom index:

    python3 -m pip install onnx_graphsurgeon --index-url https://pypi.ngc.nvidia.com
    

    If this doesn’t resolve the issue, proceed to the next steps.

  2. Build and install ONNX GraphSurgeon from source:
    Follow these steps to build and install the package from the TensorRT GitHub repository:

    a. Clone the TensorRT repository:

    git clone -b release/8.5 https://github.com/NVIDIA/TensorRT.git
    

    b. Navigate to the onnx-graphsurgeon directory:

    cd TensorRT/tools/onnx-graphsurgeon/
    

    c. Build the package:

    make build
    

    d. Install the built wheel (the exact version in the filename depends on the branch you checked out, so a glob is used):

    sudo python3 -m pip install ./dist/onnx_graphsurgeon-*-py2.py3-none-any.whl
    

    This process should successfully install onnx-graphsurgeon and its dependencies.

  3. Verify the installation:
    After installation, verify that the package is correctly installed by running:

    python3 -c "import onnx_graphsurgeon; print(onnx_graphsurgeon.__version__)"
    

    This should print the version of onnx_graphsurgeon without any errors.
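The one-liner above fails with a traceback if anything is missing. For a gentler check of both onnx and onnx_graphsurgeon at once, a sketch like this reports each package without crashing:

```python
# Report the version of each package if importable, or the reason it
# is not, instead of stopping at the first ImportError.
import importlib

for name in ("onnx", "onnx_graphsurgeon"):
    try:
        module = importlib.import_module(name)
        print(name, "version:", getattr(module, "__version__", "unknown"))
    except ImportError as exc:
        print(name, "is not importable:", exc)
```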

  4. Update TensorRT samples:
    If the issue persists, consider updating the TensorRT samples to ensure compatibility with your installed TensorRT version. Check the NVIDIA Developer website for the latest TensorRT version compatible with your Jetson Nano and download the appropriate samples.

  5. Check Python environment:
    Ensure that you’re using the correct Python environment. On Jetson Nano, use Python 3.6 or later, and make sure the pip you install with belongs to the same interpreter that runs the script. Verify the Python version with:

    python3 --version
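The same check can be done programmatically, so a script aborts early with a clear message instead of failing later on an import error. A minimal sketch:

```python
# Abort with a clear message when the interpreter is older than 3.6,
# mirroring the manual `python3 --version` check.
import sys

if sys.version_info < (3, 6):
    raise SystemExit("Python 3.6 or later is required, got " + sys.version.split()[0])
print("Python version OK:", sys.version.split()[0])
```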
    
  6. Inspect PYTHONPATH:
    Packages installed with pip normally land on sys.path automatically; PYTHONPATH only matters if onnx_graphsurgeon was placed in a non-standard directory. Check the current value with:

    echo $PYTHONPATH
    

    If needed, add the appropriate path:

    export PYTHONPATH=$PYTHONPATH:/path/to/onnx_graphsurgeon
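
To see that PYTHONPATH really ends up on the interpreter's search path, the effect can be demonstrated with a throwaway directory (this sketch uses a temporary path purely for illustration, not an actual onnx_graphsurgeon location):

```python
# Launch a child interpreter with an extra PYTHONPATH entry and confirm
# that the directory appears on its sys.path.
import os
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as extra_dir:
    env = dict(os.environ, PYTHONPATH=extra_dir)
    result = subprocess.run(
        [sys.executable, "-c", "import sys; print(sys.path)"],
        env=env, capture_output=True, text=True,
    )
    print("directory on sys.path:", extra_dir in result.stdout)
```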
    
  7. Reinstall dependencies:
    If issues persist, try reinstalling ONNX and other related dependencies:

    sudo pip3 install --upgrade onnx
    

By following these steps, users should be able to resolve the ModuleNotFoundError for ‘onnx_graphsurgeon’ and successfully run the convert_to_onnx.py script on their Jetson Nano board.
