This error message appears when the total number of training images modulo the batch size is not zero. For instance, with 50 images and a batch size of 8, I get 6 full batches plus a leftover batch of 2, which throws off the tensor shape (expected: [8, nc, height, width], observed: [2, nc, height, width]). What's the best way to overcome this issue?
RuntimeError: The expanded size of the tensor (3) must match the existing size (4) at non-singleton dimension 1. Target sizes: [1, 3, 256, 256]. Tensor sizes: [1, 4, 256, 256]
I have 1239 images in ./train/A and 4952 in ./train/B
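That particular mismatch (4 channels instead of 3) is usually caused by RGBA PNGs mixed in with RGB images, rather than by the batch-size remainder. A minimal sketch for finding and converting such images before training; the folder path and function name are assumptions, not part of the repo:

```python
import os
from PIL import Image

def convert_to_rgb(folder):
    """Convert any non-RGB image in `folder` (e.g. RGBA or palette mode)
    to 3-channel RGB in place, so every image yields a [3, H, W] tensor."""
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        img = Image.open(path)
        if img.mode != 'RGB':  # e.g. 'RGBA', 'P', 'L'
            img.convert('RGB').save(path)

# Hypothetical usage on the dataset directories mentioned above:
# convert_to_rgb('./train/A')
# convert_to_rgb('./train/B')
```

Running this once over ./train/A and ./train/B should make every loaded image produce a 3-channel tensor.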
I made a temporary fix that skips that batch. Add the following before setting the model input in train.py, line #97:
# Skip the final batch when the number of training images is not divisible by the batch size
if len(batch['A']) != opt.batchSize or len(batch['B']) != opt.batchSize:
    continue
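A cleaner alternative to skipping the batch inside the training loop is to let the loader discard the incomplete final batch. This is a sketch with dummy data, not the repo's actual DataLoader setup, but `drop_last=True` is a standard `torch.utils.data.DataLoader` option:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# 50 dummy "images" with a batch size of 8, as in the example above
dataset = TensorDataset(torch.randn(50, 3, 64, 64))
loader = DataLoader(dataset, batch_size=8, drop_last=True)

# drop_last=True discards the leftover batch of 2, so every batch
# has exactly the shape [8, nc, height, width]
for (batch,) in loader:
    assert batch.shape[0] == 8
```

With this, the guard in the training loop is no longer needed, and the two leftover images are simply not seen during that epoch (they are re-shuffled into full batches on the next epoch if `shuffle=True`).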