Very interesting system! Some small findings and a question #1
Hi,
Thank you very much for your kind message, and for trying out the toolkit! It's still at an early stage, but we hope to make it more robust in the coming months.
1. You are absolutely right, sorry about that! I just pushed your fix to GitHub. I had moved the "PerViewMeshesQSTR.compute" file into a "Resources" subfolder two days ago and had forgotten to update the relative paths. Thanks for letting me know! :)
2. Ah, interesting. I haven't had this issue on my machines yet, but it's good to know, and thank you for the fix. Just to make sure I understood the issue correctly: you ran the reconstruction from within COLIBRI VR in Unity, it failed because a dll was missing, but the Unity console reported success and didn't raise an error? I'll have to try to reproduce this, to at least be able to raise a helpful error in the Unity console when it occurs.
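As a first idea, the check could simply look at the external process's exit code instead of trusting that the command ran. A minimal sketch in Python (COLIBRI VR itself is C#, and the paths here are hypothetical):

```python
import subprocess

# Hypothetical COLMAP invocation; a non-zero exit code means failure,
# even if nothing obvious is printed to the console.
result = subprocess.run(
    [r"C:\colmap\bin\colmap.exe", "mapper",
     "--database_path", r"Data\Dataset\database.db",
     "--image_path", r"Data\Dataset\images",
     "--output_path", r"Data\Dataset\sparse"],
    capture_output=True, text=True,
)
if result.returncode != 0:
    raise RuntimeError(f"COLMAP failed (exit code {result.returncode}):\n{result.stderr}")
```

Additionally checking for the expected output files (e.g. a non-empty sparse folder) would catch the silent-failure case you describe as well.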
3. The file structure when using the helper script for COLMAP should be as follows (see the sketch after this list):
- Before reconstruction: Data/Dataset/images. The "Source data" folder is Data/Dataset.
- After sparse reconstruction: Data/Dataset/sparse (this contains any COLMAP camera model, e.g. SIMPLE_RADIAL), but also Data/Dataset/dense/0/sparse (this contains the undistorted camera, which should be PINHOLE). The Data/Dataset/dense/0 folder thus becomes the main folder for the dense reconstruction step.
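Laid out as a tree, the expected structure looks roughly like this:

```
Data/
└── Dataset/            <- "Source data" before reconstruction
    ├── images/
    ├── sparse/         <- any COLMAP camera model, e.g. SIMPLE_RADIAL
    └── dense/
        └── 0/          <- "Source data" for the dense step
            ├── images/ <- undistorted images
            └── sparse/ <- undistorted camera, should be PINHOLE
```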
So after sparse reconstruction, the "Source data" folder of the Processing component should automatically be changed to Data/Dataset/dense/0. If you get the "COLMAP camera not currently supported" error message, it probably means this did not happen, i.e. the component was still looking at the Data/Dataset folder (which indeed contained a SIMPLE_RADIAL camera instead of the expected PINHOLE). So I have to fix this. Thanks again!
4. To be sure I understand correctly: you ran the sparse reconstruction using COLMAP, to get the 3D camera setup; did you also run the dense reconstruction? If only the sparse reconstruction: only the "focal surface" rendering methods should be available (there is no depth data or 3D mesh, so indeed fewer methods than in the video tutorial). If the dense reconstruction as well: the generated .ply mesh has to be converted to .obj format to be readable by Unity, which you can do e.g. with Blender (for instance using the dedicated helper class), and then more processing/rendering methods will become available (not the depth map ones yet, though; I still have to work on importing the .bin depth maps generated by COLMAP during dense reconstruction).
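If you'd rather script the conversion than click through Blender, something along these lines should work (a sketch using Blender 2.8x operator names; the mesh file names are placeholders):

```python
# Run with: blender --background --python ply_to_obj.py
import bpy

# Start from an empty scene so only the imported mesh gets exported.
bpy.ops.wm.read_factory_settings(use_empty=True)
bpy.ops.import_mesh.ply(filepath="Data/Dataset/dense/0/meshed.ply")
bpy.ops.export_scene.obj(filepath="Data/Dataset/dense/0/mesh.obj")
```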
Concerning changing the focal length, did modifying the slider affect the
output view at all? If not, you may want to update the package from GitHub,
I added an important fix on this point yesterday (it was indeed broken
before). If it did affect the output view but the images were still too
small even at the largest focal length, you can increase the max focal
length by changing the value of the "Focal bounds" parameter.
Thank you so much for all of this feedback! If you're interested in experimenting further, keep an eye on the YouTube tutorials; in the next few days the following should be coming:
- A tutorial giving the full pipeline from images to rendered light field.
Input data: the Amethyst dataset of the Stanford light field archive (
http://lightfield.stanford.edu/lfs.html).
- A tutorial giving the full pipeline from images to 3D mesh rendered with
unstructured lumigraph rendering. Input data: the Terrains dataset from
ETH3D (https://www.eth3d.net/datasets).
Hopefully these will help! I expect to catch a few bugs by redoing these
full pipelines, so the toolkit should be a bit more robust by the end of
this process.
Best regards,
Greg
On Thu, Apr 2, 2020 at 2:46 PM Lex van der Sluijs wrote:
Hi,
Thanks for the great work and making this available! I've been looking
into lightfield technology and how it can be used to capture and play back
3D scenes (in VR, AR, regular 3D) using readily available technology, and
this looks like a great toolbox for that! And it's even integrated into
Unity already, wow.
Here are a few small things I had to do to get it up and running on my
computer, and at the end a question on the usage:
1. In PerViewMeshesQSTR.compute I replaced the first dot in the cginc file names with a double dot, otherwise the files couldn't be found:

```hlsl
#include "../../CGIncludes/CoreCG.cginc"
#include "../../CGIncludes/CameraCG.cginc"
#include "../../CGIncludes/ColorCG.cginc"
```
2. When I ran a sparse reconstruction the first time, it completed very quickly and then said it was successful, but there were actually no results in the sparse folder. I ran the command line from the Console in a command prompt and found that COLMAP was missing glew32.dll. So I ended up copying the files in the COLMAP 'lib' folder into the 'bin' folder, and that solved it :-)
3. After computing a reconstruction of the 'door' dataset, I was a bit surprised to see this message: "COLMAP camera type SIMPLE_RADIAL is not currently supported by COLIBRI VR." Suggestion: put the currently unsupported camera models between brackets in the COLMAP Editor panel (or something like that).
4. I ran the reconstruction again, with the undistorted images and the PINHOLE camera model, and got a good result, I think. Just a question: when I viewed it using the Rendering script, I could not get a result larger than the one in the attached image <https://ibb.co/6Ftcj24>, even when varying the Focal length and Max blend angle parameters. The list of Blending methods is also smaller than in the tutorial video; maybe because the downloadable unitypackage is a bit older than the development version?
Looking forward to doing additional experiments with it (e.g. exporting to a VR experience)!
Lex
Hi Greg, thanks for your fast reply!
Re 2: yes, that is correct. Perhaps you have the 'lib' folder of COLMAP in your path? I just had a look to see how it's possible that COLMAP itself works: it turns out that in COLMAP.BAT the 'lib' folder is added to the path on the fly.
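The same idea, for anyone launching COLMAP from their own script rather than through COLMAP.BAT, would look roughly like this (a sketch; the install path is an assumption):

```python
import os
import subprocess

colmap_root = r"C:\colmap"  # hypothetical install location
env = os.environ.copy()
# Prepend COLMAP's 'lib' folder so glew32.dll and the other dlls are found.
env["PATH"] = os.path.join(colmap_root, "lib") + os.pathsep + env["PATH"]
subprocess.run([os.path.join(colmap_root, "bin", "colmap.exe"), "gui"], env=env)
```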
Re 3: Aha, it's good to know that the system changes that folder automatically. I may have changed it to another folder myself while trying to understand how to get everything running. The remark about seeing small squares is based on the situation where the folder is set correctly (so \dense\0), so I must have changed it back. Thanks for the tip. The folder structure you describe is indeed created, and in the sparse folder there is a cameras.txt which indeed contains a PINHOLE camera.
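As a quick sanity check, the camera model can be read directly from cameras.txt; each non-comment line starts with the camera id and model name (this follows COLMAP's text export format, and the path below is just my local one):

```python
# Print the camera model(s) listed in a COLMAP cameras.txt file.
with open(r"Data/Dataset/dense/0/sparse/cameras.txt") as f:
    for line in f:
        if line.startswith("#") or not line.strip():
            continue  # skip header comments and blank lines
        cam_id, model, width, height, *params = line.split()
        print(f"camera {cam_id}: {model} ({width}x{height})")
```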
Then in Data processing I check the top two (available) checkmarks, and in the Rendering script I then have three options.
Re 4: Yes, when I use the Focal length slider or the Max. blend angle slider, the size of the circle inside the black square changes. I can also modify the focal bounds, but this does not have a clearly visible effect. A small hypothesis: maybe your pipeline works fully when using synthetic images, but there is a bug when using photographs? As you say, you will most likely see it when recording the last tutorial(s), so I will be on the lookout for them! :-) When I have some results of my own I will be happy to share them.
All the best, and thanks again,
Lex
Hi again Lex,
The tutorial videos are finally online! Sorry this took so long, my workload has kept me quite busy of late. I hope they can be of use! Making the videos helped me notice and repair quite a few bugs in the processing pipeline, so it should now run more smoothly. And there are a few improvements (e.g. the recovered 3D mesh now appears as a preview in the Scene view during processing). Thanks again for your feedback!