Thanks for sharing the code for this great work!

I tried to run the DDP pre-training script on the UCF101 dataset, but there are many warnings like this:
aug_type 1, threshold: 0.5
=> train from scratch
====================
Loading Dataset from /data/ucf101/mp4, list file: ./data/lists/ucf101/train_split1.txt
9537 samples, 0 missing, 0 Too short.
Log file Created
... (omitted for clarity)
Log: ===================================
lr: 0.001 -> 0.0001
Log: new lr: 0.0001
Ran out of frames. Looping.
Ran out of frames. Looping.
Ran out of frames. Looping.
Ran out of frames. Looping.
Ran out of frames. Looping.
Ran out of frames. Looping.
Ran out of frames. Looping.
...
From this issue report, it seems the problem is caused by incorrect frame indices being passed to lintel.loadvid_frame_nums in your code here. I checked the MP4 files of UCF101 and they seem fine (OpenCV can read them with the correct number of frames). Did you ever encounter such an issue?
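For reference, here is a minimal sketch of such an OpenCV check (the file path is just a placeholder; note that CAP_PROP_FRAME_COUNT comes from the container metadata, so it need not match what a decoder actually yields):

```python
import cv2

def opencv_frame_count(path):
    """Return the frame count OpenCV reports for a video file."""
    cap = cv2.VideoCapture(path)
    if not cap.isOpened():
        raise IOError(f'cannot open {path}')
    # CAP_PROP_FRAME_COUNT is read from the container header: fast, but it
    # can disagree by a frame with what a decoder such as lintel produces.
    count = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.release()
    return count

print(opencv_frame_count('/data/ucf101/mp4/v_ApplyEyeMakeup_g01_c01.mp4'))
```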
I have encountered the same issue before. Per dukebw/lintel#31, the number of MP4 frames lintel decodes is off by one compared with OpenCV.
In my experiments, the warning went away when reading frame indices in the range [1, vlen] for HMDB51 and [0, vlen-1] for UCF101 (where vlen is the number of frames counted by OpenCV).
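A minimal sketch of this workaround, assuming the frame_nums keyword usage from the lintel README; the load_frames helper and the per-dataset offset argument are my own (offset encodes the observation above: 1 for HMDB51, 0 for UCF101):

```python
import lintel
import numpy as np

def load_frames(path, frame_nums, vlen, offset):
    """Decode frames with lintel, clamping indices to [offset, vlen - 1 + offset].

    vlen is the frame count reported by OpenCV. frame_nums is assumed to be
    sorted, since lintel decodes the requested frames in ascending order.
    """
    # Clamp requested indices into the range lintel can actually decode.
    safe_nums = [min(max(n, offset), vlen - 1 + offset) for n in frame_nums]
    with open(path, 'rb') as f:
        encoded = f.read()
    # lintel returns the raw RGB bytes plus the decoded width and height.
    frames, width, height = lintel.loadvid_frame_nums(encoded, frame_nums=safe_nums)
    frames = np.frombuffer(frames, dtype=np.uint8)
    return frames.reshape((len(safe_nums), height, width, 3))
```

With offset=0 every requested index stays inside [0, vlen-1], which is the UCF101 range that made the "Ran out of frames. Looping." warning disappear in my runs.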