Jetson Orin Nano Dev Board Pods Stuck in ContainerCreating State
Issue Overview
Users of the Nvidia Jetson Orin Nano Dev Board have reported an issue where Kubernetes pods remain in a "ContainerCreating" state indefinitely. This problem primarily occurs when attempting to run k3s, a lightweight Kubernetes distribution, on the device.
Symptoms and Context
- Pods are not being created successfully, as evidenced by the output of `kubectl get pods -A`, which shows multiple pods stuck in "ContainerCreating" status.
- The issue manifests after executing the k3s installation command, which appears to complete without errors, but subsequent commands reveal that no containers are running.
- The error logs indicate problems related to cgroup configurations, particularly failures to create pod sandboxes due to missing files in the cgroup directory.
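To confirm that the failures are cgroup-related before changing anything, it can help to read the k3s service log and look at the cgroup hierarchy the runtime is trying to use. A minimal check, assuming k3s was installed as a systemd service named `k3s`:

```bash
# Recent pod-sandbox failures reported by the embedded kubelet/containerd
sudo journalctl -u k3s --since "1 hour ago" | grep -i "failed to create pod sandbox"

# Inspect the cgroup hierarchy that is actually mounted
mount | grep cgroup
ls /sys/fs/cgroup
```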
Hardware and Software Specifications
- The user is running a custom kernel with specific configurations enabled for iSCSI TCP support and scheduling options.
- The Jetson Orin Nano is booting from an SSD and is running the latest version of Jetpack (6.0+b106).
- Kernel version: 5.15.136-rt-tegra.
Frequency and Impact
This issue seems to occur consistently when using the custom kernel, while reverting to a standard kernel allows pods to be created successfully. The impact on user experience is significant, as it prevents the deployment of applications within Kubernetes, limiting the functionality of the development board.
Possible Causes
- Hardware Incompatibilities or Defects: Custom kernel configurations may not be fully compatible with k3s or Docker’s requirements.
- Software Bugs or Conflicts: Issues within the Nvidia container runtime or k3s itself may lead to conflicts when trying to create containers.
- Configuration Errors: Incorrect settings in Docker or Kubernetes configurations could prevent proper initialization of containers.
- Driver Issues: The use of outdated or improperly configured Nvidia drivers may affect container execution.
- Environmental Factors: The specific setup (e.g., SSD booting) might introduce unforeseen issues related to file system access or performance.
- User Errors or Misconfigurations: Misconfigurations during kernel compilation or Docker setup could lead to these problems.
Troubleshooting Steps, Solutions & Fixes
- Verify Kernel Configuration:
  - Ensure that all necessary kernel options are enabled for container support. Consider using a standard kernel if issues persist with custom configurations.
  - Disable real-time scheduling configurations if they are not required for your application.
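  - Example checks (a sketch: `k3s check-config` is a built-in k3s subcommand, and `/proc/config.gz` is only present when the kernel was built with `CONFIG_IKCONFIG_PROC`):

    ```bash
    # Report kernel features that k3s/containerd expect but cannot find
    sudo k3s check-config

    # If the running kernel exposes its config, inspect cgroup and RT options directly
    zcat /proc/config.gz | grep -E "CONFIG_CGROUPS|CONFIG_MEMCG|CONFIG_CPUSETS|CONFIG_PREEMPT_RT"
    ```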
- Check Docker Configuration:
  - Review the Docker daemon configuration file located at `/etc/docker/daemon.json` for correctness. Ensure that it specifies the Nvidia runtime properly.
  - Example configuration:

        {
          "runtimes": {
            "nvidia": {
              "args": [],
              "path": "nvidia-container-runtime"
            }
          }
        }
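  - Rather than editing the file by hand, the Nvidia Container Toolkit can register the runtime for you; a possible sequence, assuming `nvidia-ctk` is installed:

    ```bash
    # Add the nvidia runtime entry to /etc/docker/daemon.json and restart Docker
    sudo nvidia-ctk runtime configure --runtime=docker
    sudo systemctl restart docker

    # Confirm that Docker now lists the nvidia runtime
    docker info | grep -i runtime
    ```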
- Update Nvidia Container Toolkit:
  - Run `sudo nvidia-ctk cdi generate --output=/etc/cdi/nvidia.yaml --mode=csv` to ensure that CDI devices are correctly registered.
  - Check if the correct version of CUDA is installed and compatible with your hardware.
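  - To confirm the generated spec is actually picked up, one possible check (assuming a toolkit version that provides `nvidia-ctk cdi list`):

    ```bash
    # List the CDI device names registered from /etc/cdi/nvidia.yaml
    nvidia-ctk cdi list

    # Sanity-check that the spec file exists and is non-empty
    ls -l /etc/cdi/ && head -n 20 /etc/cdi/nvidia.yaml
    ```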
- Test with Standard Kernel:
  - If using a custom kernel, revert to a standard kernel version known to work with k3s and check whether pods can be created successfully.
- Inspect Logs for Detailed Errors:
  - Use `kubectl describe pod <pod-name> -n <namespace>` to gather detailed information about why specific pods are failing.
  - Look for cgroup-related errors in the logs that might indicate misconfigurations.
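  - A sketch for pulling the cgroup-related messages out in one pass, assuming a systemd-managed k3s install and its bundled `crictl`:

    ```bash
    # Cluster events usually repeat the sandbox-creation error verbatim
    kubectl get events -A | grep -i cgroup

    # Containerd's view of the stuck sandboxes, via the crictl bundled with k3s
    sudo k3s crictl pods

    # Service log, filtered to cgroup/sandbox messages
    sudo journalctl -u k3s | grep -iE "cgroup|sandbox" | tail -n 50
    ```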
- Run Diagnostic Containers:
  - Use diagnostic containers like `nvcr.io/nvidia/l4t-cuda` to verify that the GPU and CUDA environment are functioning correctly.
  - Example command: `docker run --rm -ti --runtime=nvidia nvcr.io/nvidia/l4t-cuda:12.2.12-devel /bin/bash`
- Consult Documentation and Community Resources:
  - Refer to official Nvidia documentation regarding Jetson devices and k3s setups.
  - Engage with community forums for additional insights or similar experiences from other users.
- Monitor Resource Availability:
  - Ensure that sufficient resources (CPU, memory) are available on the Jetson Orin Nano for running k3s and its associated pods.
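  - A few quick checks (a sketch; `kubectl top` relies on metrics-server, which k3s deploys by default):

    ```bash
    # Memory, CPU and disk headroom on the board itself
    free -h
    nproc
    df -h /

    # Per-node usage as seen by Kubernetes (needs metrics-server)
    kubectl top nodes
    ```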
Recommended Approach
Multiple users have successfully resolved this issue by disabling real-time configurations in their custom kernels, leading to proper pod initialization and operation within k3s. If you encounter similar problems, consider this as a primary troubleshooting step.
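A quick way to confirm whether the currently booted kernel still has real-time preemption enabled is to inspect the kernel version string and, where available, the exposed kernel config; a minimal sketch (option names can vary between kernel trees):

```bash
# An "-rt" suffix or "PREEMPT_RT" in the version strings indicates a real-time kernel
uname -r
uname -v

# If the kernel exposes its config, check the preemption model directly
zcat /proc/config.gz | grep "CONFIG_PREEMPT"
```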
Additional Notes
The issue appears complex due to interactions between custom kernel settings, Docker configurations, and Nvidia’s runtime environment. Further investigation may be required if problems persist after following these troubleshooting steps.