Getting error message: 'IBuilderConfig' object has no attribute 'set_memory_pool_limit'

Issue Overview

Users are encountering an error on the Nvidia Jetson Nano developer board while executing a Python script that uses TensorRT for inference. The specific error states that the IBuilderConfig object has no attribute set_memory_pool_limit. It arises when running the sample code from the eLinux documentation on TensorRT and ONNX integration, which was written against a newer TensorRT release than the one installed on the board.
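
The failing call, taken from the eLinux sample, is shown below; on TensorRT 8.0 the attribute does not exist, so Python raises an AttributeError:

  config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)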

Symptoms and Context

  • Error Message:

    • AttributeError: 'tensorrt.tensorrt.IBuilderConfig' object has no attribute 'set_memory_pool_limit'
  • Execution Environment:

    • The issue occurs when running the script infer.py using Python 3.6.
  • Hardware Specifications:

    • The problem is reported on the Jetson Nano board.
  • Software Specifications:

    • The affected setup uses TensorRT version 8.0.1.6; one user reports that the same sample runs successfully on TensorRT 8.5.
  • Frequency:

    • The error occurs consistently whenever the script reaches the set_memory_pool_limit call.
  • Impact:

    • The error prevents users from running their inference program, significantly affecting their development workflow.

Possible Causes

  • API Changes:

    • The method set_memory_pool_limit was introduced in a later TensorRT release (around 8.4, as the replacement for the deprecated max_workspace_size attribute), so it simply does not exist in version 8.0.1.6.
  • Version Mismatch:

    • Users may be using different versions of TensorRT, leading to discrepancies in available methods and functionality.
  • Code Compatibility:

    • The example code may not be fully compatible with older versions of TensorRT, necessitating updates to method calls.
  • Configuration Errors:

    • Incorrect configuration settings or assumptions about available APIs could lead to such errors.

Troubleshooting Steps, Solutions & Fixes

  1. Comment Out Problematic Line:

    • As a quick workaround, comment out the line causing the error:
      # config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
      
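    • If the workspace limit still needs to be set on TensorRT 8.0.x, the older max_workspace_size attribute serves the same purpose; set_memory_pool_limit was added later (around TensorRT 8.4) as its replacement. A hedged sketch that works across both API generations, assuming config is the IBuilderConfig created earlier in the script:

      if hasattr(config, "set_memory_pool_limit"):  # newer TensorRT (>= ~8.4)
          config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30)
      else:  # TensorRT 8.0.x
          config.max_workspace_size = 1 << 30  # 1 GiB of builder scratch memory
      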
  2. Modify Code for Compatibility:

    • The eLinux sample uses the tensor-based I/O API added in newer TensorRT releases. On TensorRT 8.0, replace those calls with the older binding-based equivalents (a combined, version-agnostic sketch follows these changes):
      • Change:

        size = trt.volume(engine.get_tensor_shape(binding)) * batch
        

        To:

        size = trt.volume(engine.get_binding_shape(binding)) * batch
        
      • Change:

        if engine.get_tensor_mode(binding) == trt.TensorIOMode.INPUT:
        

        To:

        if engine.binding_is_input(binding):
        
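    • For scripts that must run on both setups, a single loop can branch on whichever API is available. This is a hedged sketch, not code from the original thread; engine and batch are assumed to be defined by the surrounding infer.py script:

      # Version-agnostic I/O inspection: size and is_input feed the
      # sample's buffer-allocation logic.
      if hasattr(engine, "num_io_tensors"):  # TensorRT >= 8.5: tensor API
          for i in range(engine.num_io_tensors):
              name = engine.get_tensor_name(i)
              size = trt.volume(engine.get_tensor_shape(name)) * batch
              is_input = engine.get_tensor_mode(name) == trt.TensorIOMode.INPUT
      else:  # TensorRT 8.0.x: binding-based API
          for binding in engine:
              size = trt.volume(engine.get_binding_shape(binding)) * batch
              is_input = engine.binding_is_input(binding)
      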
  3. Check TensorRT Version:

    • Verify the installed TensorRT version. If it was installed via pip, run:
      pip show nvidia-tensorrt
      
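    • On Jetson boards TensorRT is typically installed by JetPack rather than pip, so importing the module is the more reliable check:

      python3 -c "import tensorrt; print(tensorrt.__version__)"
      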
  4. Update to Latest Version:

    • If feasible, upgrade to a more recent TensorRT release (e.g., 8.5), where the sample code has been verified to work. On Jetson boards TensorRT ships as part of JetPack, so this generally means moving to a newer JetPack release.
  5. Consult Documentation:

    • Review the official Nvidia documentation for TensorRT to identify any changes in API methods between versions.
    • Refer to community forums or Nvidia’s developer resources for additional insights on deprecated methods and their alternatives.
  6. Testing in Isolation:

    • Run simplified versions of the script with minimal configuration to isolate which specific lines or settings trigger errors.
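    • As a starting point, a minimal script that only creates a builder config can confirm which APIs the installed TensorRT exposes (a standalone sketch, independent of infer.py):

      import tensorrt as trt
      
      # Report the TensorRT version and which workspace-related APIs exist.
      logger = trt.Logger(trt.Logger.WARNING)
      builder = trt.Builder(logger)
      config = builder.create_builder_config()
      print("TensorRT version:", trt.__version__)
      print("has set_memory_pool_limit:", hasattr(config, "set_memory_pool_limit"))
      print("has max_workspace_size:", hasattr(config, "max_workspace_size"))
      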
  7. Seek Community Support:

    • If issues persist after applying these changes, consider posting detailed queries on forums or community boards where other developers might have faced similar challenges.

Unresolved Aspects

  • Users are still seeking confirmation of the correct equivalent of config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, 1 << 30) on older TensorRT releases such as 8.0.1.6; the max_workspace_size attribute (see step 1) appears to be the intended counterpart, but this was not explicitly confirmed.

  • Further investigation may be needed regarding compatibility issues between different TensorRT versions and their respective APIs.
