I have been working through your code trying to get it working, and I believe I found an issue where you set the `time_dim` for the temporal layers. You are setting the same `time_dim` for all of the layers, but the size of the temporal dimension is cut in half after each downsampling step in the UNet. Because of this, the model crashes when trying to reshape/rearrange the tensors in the intermediate layers, for instance here (maybe in other places as well?):
```python
if is_video:
    batch_size = x.shape[0]
    x = rearrange(x, 'b c t h w -> b h w t c')
else:
    assert exists(batch_size) or exists(self.time_dim)
    rearrange_kwargs = dict(b = batch_size, t = self.time_dim)
    x = rearrange(x, '(b t) c h w -> b h w t c', **compact_values(rearrange_kwargs))
```
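For concreteness, here is a minimal standalone sketch (not code from the repo) of why a stale `t` breaks this rearrange: once the temporal dimension has been halved, the flattened batch no longer factors as `b * t` with the original `t`, so einops raises a shape error.

```python
import torch
from einops import rearrange

b, t, c, h, w = 2, 8, 4, 16, 16

# Frames flattened into the batch, matching the '(b t) c h w' layout.
x = torch.randn(b * t, c, h, w)

# Works while t matches the actual temporal size.
out = rearrange(x, '(b t) c h w -> b h w t c', b=b, t=t)

# After one temporal downsample the tensor holds t // 2 frames, but every
# layer was given the original t, so the grouped axis no longer factors.
x_down = torch.randn(b * (t // 2), c, h, w)
rearrange(x_down, '(b t) c h w -> b h w t c', b=b, t=t)  # raises EinopsError: shape mismatch
```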
I am working on my own workaround in the same `set_time_dim` function, but thought I would report it in case it is helpful.
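In case it is useful, the direction I am experimenting with is roughly the following. This is only a sketch with stand-in classes, not the repo's actual module tree: the idea is to give each UNet resolution level the temporal size it will actually see, halving once per downsample, instead of broadcasting a single `time_dim` to every temporal layer.

```python
# Stand-in classes; a real fix would walk the repo's actual modules.
class TemporalLayer:
    def __init__(self):
        self.time_dim = None

class Stage:
    def __init__(self, num_temporal_layers=2):
        self.temporal_layers = [TemporalLayer() for _ in range(num_temporal_layers)]

def set_time_dim_per_stage(stages, time_dim):
    for level, stage in enumerate(stages):
        # The temporal dimension is cut in half at each downsampling step,
        # so each level gets time_dim // 2**level rather than the full value.
        stage_time_dim = time_dim // (2 ** level)
        for layer in stage.temporal_layers:
            layer.time_dim = stage_time_dim

stages = [Stage() for _ in range(3)]
set_time_dim_per_stage(stages, time_dim=8)
print([layer.time_dim for stage in stages for layer in stage.temporal_layers])
# -> [8, 8, 4, 4, 2, 2]
```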