I have some questions mostly regarding the transformations you used.
In lib/data/TrainDataset.py you create the calibration matrix in get_render. You build the uv_intrinsic matrix with 1.0 / float(self.opt.loadSize // 2), where opt.loadSize = 512. However, the feature maps you sample from are 128 x 128. Wouldn't it be more correct to use opt.loadSize = 128, or does it not matter?
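If I understand correctly, the uv coordinates eventually feed F.grid_sample, which works in normalized [-1, 1] coordinates, so perhaps the divisor only needs to map image pixels into that range and the feature-map resolution is irrelevant. A minimal sketch of that reading (my own toy tensors, not the repo's code):

```python
import torch
import torch.nn.functional as F

load_size = 512
feat = torch.randn(1, 256, 128, 128)      # stand-in for a 128 x 128 feature map
xy = torch.tensor([[[256.0], [-256.0]]])  # one corner point in [-256, 256] pixel units, [B, 2, N]

uv = xy / float(load_size // 2)           # -> [-1, 1], same scale factor as uv_intrinsic
grid = uv.transpose(1, 2).unsqueeze(2)    # [B, N, 1, 2], the layout grid_sample expects
samples = F.grid_sample(feat, grid, align_corners=True)  # [B, 256, N, 1] regardless of feat's H, W
```

Since grid_sample normalizes by the feature map's own height and width, the same uv in [-1, 1] addresses a 512 x 512 image and a 128 x 128 feature map alike. Is that the intended reasoning?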
Your calib matrix is a 4x4 matrix whose last row is [0, 0, 0, 1]. That suggests the input 3D points should be in homogeneous coordinates, yet your 3D points have shape [B, 3, N]. Furthermore, although the calib matrix is 4x4, the docstring of the query function in lib/model/HGPIFuNet says (see the sketch after the quoted docstring):
:param points: [B, 3, N] world space coordinates of points
:param calibs: [B, 3, 4] calibration matrices for each image
:param transforms: Optional [B, 2, 3] image space coordinate transforms
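My current guess, as a hedged sketch: if the projection only ever reads the top 3x4 block of the calib (which would reconcile the 4x4 matrix with the [B, 3, 4] docstring), then the homogeneous row is never used and [B, 3, N] points work as-is. Illustrative tensors below are my own, not the repo's:

```python
import torch

B, N = 2, 1000
points = torch.randn(B, 3, N)            # world-space points, no homogeneous row
calib = torch.eye(4).repeat(B, 1, 1)     # [B, 4, 4], last row [0, 0, 0, 1]

rot = calib[:, :3, :3]                   # [B, 3, 3] rotation/scale part
trans = calib[:, :3, 3:4]                # [B, 3, 1] translation part
xyz = torch.baddbmm(trans, rot, points)  # rot @ points + trans -> [B, 3, N]
# The last row of calib is never read, so the points need no padding to 4 x N.
```

If that is indeed what query does internally, the [B, 3, 4] shape in the docstring is the effective one and the last row of the 4x4 matrix is just padding. Could you confirm?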
Thanks in advance for your time.