- Shelly Liu (Concord Academy)
- Jonah Berg (The Rivers School)
- Franklin King (BWH)
The goal is to localize a bronchoscope by using neural networks to generate depth maps from bronchoscopy images.
- Objective A. Produce point clouds/models from the depth maps.
- Objective B. Within Slicer, register the point clouds/models to a model derived from the CT scan.
- We will use data from a bronchoscopy on a phantom lung.
- We will generate depth maps using a technique by Marco Visentini-Scarzanella [1].
- We will then convert the depth maps into point clouds (a conversion sketch follows this list).
- Finally, we plan to use Slicer to register the point clouds to the CT scan.
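Below is a minimal sketch of the depth-map-to-point-cloud conversion, assuming a simple pinhole camera model. The intrinsics (fx, fy, cx, cy) and file names are placeholders for illustration, not the project's actual bronchoscope calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, metric units) into an N x 3 point cloud.

    Assumes a pinhole camera model; fx, fy, cx, cy are camera intrinsics
    (the values used below are placeholders, not a real calibration).
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with missing or invalid depth
    valid = np.isfinite(points[:, 2]) & (points[:, 2] > 0)
    return points[valid]

# Example with a hypothetical predicted depth map and placeholder intrinsics
depth = np.load("predicted_depth.npy")
cloud = depth_to_point_cloud(depth, fx=200.0, fy=200.0,
                             cx=depth.shape[1] / 2, cy=depth.shape[0] / 2)
np.savetxt("point_cloud.xyz", cloud)  # XYZ text file that can be loaded into Slicer
```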
- We have already trained and tested the neural networks used to generate the depth maps.
- We have converted a depth map into a point cloud.
- We have fixed the scale and position of the point cloud so that it matches its actual location in the phantom lung.
- We were also able to register the point cloud to the CT scan in Slicer using the Model/Surface Registration module (a scripted sketch of this step follows this list).
- The next step is to improve training so the predicted depth maps are more accurate.
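As a scripted alternative to the Model/Surface Registration module, a similar surface-to-surface alignment can be approximated in the Slicer Python console with VTK's iterative-closest-point transform. This is only a sketch under assumed node names ("PointCloudModel", "AirwaySegmentation"); it is not the exact workflow used in the project.

```python
import vtk
import slicer

# Hypothetical node names: the reconstructed point cloud has been loaded as a
# model named "PointCloudModel", and "AirwaySegmentation" is a surface model
# extracted from the CT scan.
sourceModel = slicer.util.getNode("PointCloudModel")
targetModel = slicer.util.getNode("AirwaySegmentation")

# Iterative closest point; similarity mode allows a uniform scale factor,
# which can absorb a residual size mismatch in the reconstruction.
icp = vtk.vtkIterativeClosestPointTransform()
icp.SetSource(sourceModel.GetPolyData())
icp.SetTarget(targetModel.GetPolyData())
icp.GetLandmarkTransform().SetModeToSimilarity()
icp.SetMaximumNumberOfIterations(100)
icp.Modified()
icp.Update()

# Store the resulting matrix in a transform node and apply it to the point cloud
transformNode = slicer.mrmlScene.AddNewNodeByClass("vtkMRMLLinearTransformNode", "ICPTransform")
transformNode.SetMatrixTransformToParent(icp.GetMatrix())
sourceModel.SetAndObserveTransformNodeID(transformNode.GetID())
```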
Figure: from left to right, true RGB, true rendered RGB, true depth map, predicted rendered RGB, predicted depth map from predicted RGB, predicted depth map from true RGB.
Figure: green = point cloud reconstructed from the depth map.