RuntimeError: Sizes of tensors must match except in dimension 1. #1

Open · cta5425 opened this issue Jul 12, 2024 · 0 comments
cta5425 commented Jul 12, 2024

Hi Ulrich,

Many thanks for your contribution! I have one general question about the methodology and another, more specific one. Why did you choose to combine a cINN with a variational autoencoder rather than using a conditional variational autoencoder?

Also, I tried to run the provided Jupyter notebooks. They all worked fine, except for the pose estimation example in pose_estimation.ipynb. The error occurred on the line `zz = torch.randn(batch_size, 12).to(device)`. I saw in some other commits that you set batch_size = 1; when I do the same I avoid the error, but the predictions are far from the ground truth. With batch_size = 32, as originally set, I get the following error:

```
Ground truth pose:
tensor([[-0.9466,  0.0161, -0.3221, -1.1683],
        [-0.3225, -0.0472,  0.9454,  3.4293],
        [ 0.0000,  0.9988,  0.0498,  0.1808],
        [ 0.0000,  0.0000,  0.0000,  1.0000]], device='cuda:0')

RuntimeError Traceback (most recent call last)
Cell In [13], line 99
97 # ------ Finally, we compare the results ------
98 print("Ground truth pose:\n",new_pose)
---> 99 pred_pose = tau.reverse(zz, z).squeeze(-1).squeeze(-1)
100 pred_pose = to_pose(pred_pose)
101 print("---------------\n Predicted pose: \n", pred_pose)

File ~/local-repo/AutoNeRF/models/cinn.py:80, in ConditionalTransformer.reverse(self, out, conditioning)
78 def reverse(self, out, conditioning):
79 embedding = self.embed(conditioning)
---> 80 return self.flow(out, embedding, reverse=True)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:326, in ConditionalFlow.forward(self, x, embedding, reverse)
324 else:
325 for i in reversed(range(self.n_flows)):
--> 326 x = self.sub_layers[i](x, hconds[i], reverse=True)
327 return x

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:235, in ConditionalFlatDoubleCouplingFlowBlock.forward(self, x, xcond, reverse)
233 h = x
234 h = self.shuffle(h, reverse=True)
--> 235 h = self.coupling(h, xcond, reverse=True)
236 h = self.activation(h, reverse=True)
237 h = self.norm_layer(h, reverse=True)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1532, in Module._wrapped_call_impl(self, *args, **kwargs)
1530 return self._compiled_call_impl(*args, **kwargs) # type: ignore[misc]
1531 else:
-> 1532 return self._call_impl(*args, **kwargs)

File ~/anaconda3/envs/nerfstudio/lib/python3.8/site-packages/torch/nn/modules/module.py:1541, in Module._call_impl(self, *args, **kwargs)
1536 # If we don't have any hooks, we want to skip the rest of the logic in
1537 # this function, and just call forward.
1538 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1539 or _global_backward_pre_hooks or _global_backward_hooks
1540 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1541 return forward_call(*args, **kwargs)
1543 try:
1544 result = None

File ~/local-repo/AutoNeRF/models/blocks.py:110, in ConditionalDoubleVectorCouplingBlock.forward(self, x, xc, reverse)
108 x = torch.cat(torch.chunk(x, 2, dim=1)[::-1], dim=1)
109 x = torch.chunk(x, 2, dim=1)
--> 110 conditioner_input = torch.cat((x[idx_apply], xc), dim=1)
111 x_ = (x[idx_keep] - self.ti) * self.si.neg().exp()
112 x = torch.cat((x[idx_apply], x_), dim=1)

RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 32 but got size 1 for tensor number 1 in the list.
```
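
From the traceback, the failure seems to come from the conditioning tensor keeping a batch size of 1 while the sampled latent zz has batch size 32, so the `torch.cat` inside the coupling block fails. Here is a minimal sketch of the mismatch; the exact shapes are my assumption, inferred from the traceback rather than taken from the repo:

```python
import torch

# Assumed shapes, inferred from the traceback: after torch.chunk(zz, 2, dim=1)
# each half of the latent has batch size 32, while the conditioning
# embedding apparently still has batch size 1.
x_half = torch.randn(32, 6)  # one half of zz = torch.randn(32, 12)
xc = torch.randn(1, 12)      # conditioning embedding with batch size 1

torch.cat((x_half, xc), dim=1)
# RuntimeError: Sizes of tensors must match except in dimension 1.
# Expected size 32 but got size 1 for tensor number 1 in the list.
```

If that is indeed the cause, would it be correct to simply repeat the conditioning along the batch dimension before calling reverse? Something like the following untested sketch, assuming z has a leading batch dimension of 1:

```python
# Untested workaround: broadcast the conditioning z to the sampled batch size.
z_batched = z.expand(batch_size, *z.shape[1:]).contiguous()
pred_pose = tau.reverse(zz, z_batched).squeeze(-1).squeeze(-1)
```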

Many thanks in advance.
