When testing other CAD models from the LM dataset, such as obj_000015.ply instead of obj_000005.ply, a CUDA out-of-memory error occurs.
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 19.10 GiB (GPU 0; 44.53 GiB total capacity; 34.78 GiB already allocated; 7.57 GiB free; 36.44 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
Elapsed time: 126 seconds
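The error message itself suggests one mitigation: setting `max_split_size_mb` via `PYTORCH_CUDA_ALLOC_CONF` to reduce allocator fragmentation (here, reserved memory at 36.44 GiB is well above the 34.78 GiB actually allocated). A minimal sketch, assuming the variable is set before PyTorch initializes CUDA; the 512 MiB value is an illustrative guess, not taken from this report:

```python
import os

# Must be set before `import torch` (or at least before the first CUDA call),
# otherwise the caching allocator ignores it. 512 is an example value; tune it.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# import torch  # import torch only after the variable is set
```

Equivalently, it can be set from the shell when launching the test script, e.g. `PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:512 python <your_test_script>.py`. Note this only helps when fragmentation is the problem; a single 19.10 GiB allocation may simply not fit alongside the existing 34.78 GiB.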
This is my GPU setup (`nvidia-smi` output):
Sat Oct 12 02:15:11 2024
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.23.08 Driver Version: 545.23.08 CUDA Version: 12.3 |
|-----------------------------------------+----------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+======================+======================|
| 0 NVIDIA L40S On | 00000000:00:0A.0 Off | 0 |
| N/A 40C P0 83W / 350W | 13211MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 1 NVIDIA L40S On | 00000000:FD:04.0 Off | 0 |
| N/A 41C P0 84W / 350W | 1723MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 2 NVIDIA L40S On | 00000000:FD:05.0 Off | 0 |
| N/A 31C P8 32W / 350W | 6MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 3 NVIDIA L40S On | 00000000:FD:06.0 Off | 0 |
| N/A 32C P8 33W / 350W | 6MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 4 NVIDIA L40S On | 00000000:FD:07.0 Off | 0 |
| N/A 31C P8 31W / 350W | 6MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 5 NVIDIA L40S On | 00000000:FF:00.0 Off | 0 |
| N/A 34C P8 33W / 350W | 6MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 6 NVIDIA L40S On | 00000000:FF:02.0 Off | 0 |
| N/A 31C P8 33W / 350W | 6MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
| 7 NVIDIA L40S On | 00000000:FF:03.0 Off | 0 |
| N/A 38C P0 79W / 350W | 435MiB / 46068MiB | 0% Default |
| | | N/A |
+-----------------------------------------+----------------------+----------------------+
+---------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=======================================================================================|
+---------------------------------------------------------------------------------------+
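The `nvidia-smi` output above shows GPU 0 already holding about 13 GiB while GPUs 2 through 6 are essentially idle (6 MiB used each). A simple workaround, independent of allocator tuning, is to point the job at an idle GPU so the full 46 GiB is available. A minimal sketch; the device index is an example and must be chosen from your own `nvidia-smi` output:

```python
import os

# Restrict PyTorch/CUDA to one idle GPU (index 2 in the nvidia-smi listing).
# Must be set before CUDA is initialized; inside the process, the chosen GPU
# then appears as device 0.
os.environ["CUDA_VISIBLE_DEVICES"] = "2"
```

This can also be done from the shell, e.g. `CUDA_VISIBLE_DEVICES=2 python <your_test_script>.py`, without modifying the code.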