I'm able to run inference on the KITTI 2015 dataset.
Do you know how I can run prediction on Middlebury 2014 with a single 24 GB GPU?
It always runs out of memory. Should I downsize the input?
I'm using MiddEval3-data-H (1000 x 1500 images).
Exception has occurred: RuntimeError
CUDA out of memory. Tried to allocate 5.49 GiB (GPU 0; 23.68 GiB total capacity; 16.71 GiB already allocated; 3.46 GiB free; 18.42 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
File "/home/andreaa/dev/stereo_depth/LEAStereo/retrain/skip_model_3d.py", line 47, in forward
    s1 = F.interpolate(s1, [feature_size_d, feature_size_h, feature_size_w], mode='trilinear', align_corners=True)
File "/home/andreaa/dev/stereo_depth/LEAStereo/retrain/skip_model_3d.py", line 155, in forward
    out10 = self.cells[10](out9[0], out9[1])
File "/home/andreaa/dev/stereo_depth/LEAStereo/retrain/LEAStereo.py", line 41, in forward
    cost = self.matching(cost)
File "/home/andreaa/dev/stereo_depth/LEAStereo/utils/multadds_count.py", line 21, in comp_multadds
    _ = model(input_data, input_data)
File "/home/andreaa/dev/stereo_depth/LEAStereo/predict.py", line 48, in <module>
    mult_adds = comp_multadds(model, input_size=(3, opt.crop_height, opt.crop_width))  # (3, 192, 192)
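One way to fit within 24 GB is the downsizing you mention: run the network on a downscaled stereo pair and upsample the result. A minimal sketch, assuming the model is called as `model(left, right)` (matching the `model(input_data, input_data)` call in the traceback above); `predict_downscaled` is a hypothetical helper, not part of the repo. Since disparity is measured in pixels, its values must be rescaled by the width ratio after upsampling:

```python
import torch
import torch.nn.functional as F

def predict_downscaled(model, left, right, scale=0.5):
    """Run stereo inference at reduced resolution to save GPU memory.

    left, right: (N, 3, H, W) tensors; returns a full-resolution disparity map.
    """
    h, w = left.shape[-2:]
    nh, nw = int(h * scale), int(w * scale)
    left_s = F.interpolate(left, (nh, nw), mode='bilinear', align_corners=True)
    right_s = F.interpolate(right, (nh, nw), mode='bilinear', align_corners=True)
    with torch.no_grad():  # inference only: skip autograd bookkeeping
        disp = model(left_s, right_s)
    if disp.dim() == 3:    # (N, H, W) -> add channel dim for interpolate
        disp = disp.unsqueeze(1)
    disp = F.interpolate(disp, (h, w), mode='bilinear', align_corners=True)
    # Disparity values are in pixels, so they scale with image width.
    return disp.squeeze(1) * (w / nw)
```

Note that the network may also require input dimensions divisible by its downsampling factor, so the chosen size may need padding or rounding to match the repo's `crop_height`/`crop_width` conventions.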
andrea-unity changed the title from "predict.py GPU running out of memory" to "predict_md.sh (Middlebury 2014 dataset) GPU running out of memory" on May 5, 2022.
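Before downsizing, it may also be worth trying the allocator tweak the OOM message itself suggests, since it reports reserved memory (18.42 GiB) well above allocated (16.71 GiB). A sketch, assuming `predict_md.sh` is the repo's Middlebury prediction script (the value 128 is an arbitrary starting point, not a tuned setting):

```shell
# Reduce allocator fragmentation, as suggested by the CUDA OOM message.
export PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128
sh predict_md.sh
```

This only helps when fragmentation is the problem; it cannot fit a cost volume that is simply too large for 24 GB.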