Encountered "ERROR: The NVIDIA Driver is present, but CUDA failed to initialize. GPU functionality will not be available." error after upgrading CUDA Toolkit to 12.5
#7379 · Open
jackylu0124 opened this issue on Jun 26, 2024 · 0 comments
Description
Previously I had CUDA Toolkit 12.1 on my Windows machine, and I was able to run the nvcr.io/nvidia/tritonserver:24.04-py3
Docker container with no issues at all. Today I uninstalled CUDA Toolkit 12.1 and installed the latest CUDA Toolkit 12.5, and when I tried to run the nvcr.io/nvidia/tritonserver:24.04-py3 Docker container, the inference server logs the following:
ERROR: The NVIDIA Driver is present, but CUDA failed to initialize. GPU functionality will not be available.
[[ Named symbol not found (error 500) ]]
Triton Information
nvcr.io/nvidia/tritonserver:24.04-py3
Are you using the Triton container or did you build it yourself?
I am using the nvcr.io/nvidia/tritonserver:24.04-py3 container.
Expected behavior
I should be able to run the inference server container as I was able to before.
Issue Reproduction
The following is the output of the nvcc --version and nvidia-smi commands run on my host machine:
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Wed_Apr_17_19:36:51_Pacific_Daylight_Time_2024
Cuda compilation tools, release 12.5, V12.5.40
Build cuda_12.5.r12.5/compiler.34177558_0
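Not part of the original report, but as a debugging aid: the "(error 500)" in the log matches the CUDA driver API status CUDA_ERROR_NOT_FOUND ("named symbol not found"), which the driver returns when the installed driver and the CUDA runtime in the container disagree about available symbols. The following is a minimal sketch (my own code, not from Triton) of the same cuInit check the server performs at startup; it degrades gracefully on machines without an NVIDIA driver:

```python
import ctypes

def check_cuda_init():
    """Attempt to initialize the CUDA driver API via libcuda.

    Returns 0 on success, a nonzero CUDA error code on failure
    (e.g. 500, "named symbol not found"), or None if libcuda.so.1
    is not present at all (no NVIDIA driver installed).
    """
    try:
        libcuda = ctypes.CDLL("libcuda.so.1")
    except OSError:
        return None  # driver library missing entirely
    # CUresult cuInit(unsigned int Flags); 0 means CUDA_SUCCESS
    return libcuda.cuInit(0)

if __name__ == "__main__":
    status = check_cuda_init()
    if status is None:
        print("libcuda.so.1 not found: no NVIDIA driver installed")
    elif status == 0:
        print("CUDA initialized successfully")
    else:
        print(f"cuInit failed with CUDA error {status}")
```

Running this both on the host and inside the container can show whether initialization fails only in the container (pointing at a driver/runtime mismatch) or everywhere.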
The following is the output of the nvcc --version and nvidia-smi commands run inside the Docker container:

Thanks for your help in advance!