Huiqiang Sun¹, Xingyi Li¹, Liao Shen¹, Xinyi Ye¹, Ke Xian², Zhiguo Cao¹*

¹School of AIA, Huazhong University of Science and Technology, ²School of EIC, Huazhong University of Science and Technology
This repository contains the official PyTorch implementation of our CVPR 2024 paper "DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video".
```bash
git clone https://github.com/huiqiang-sun/DyBluRF.git
cd DyBluRF
conda create -n dyblurf python=3.7
conda activate dyblurf
pip install -r requirements.txt
```
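Before training, a quick sanity check that PyTorch is installed and can see a CUDA device may save time. This is a minimal sketch, assuming `requirements.txt` installs `torch`:

```python
# check_env.py: verify that PyTorch is importable and a CUDA device is visible.
import torch

print(f"PyTorch version: {torch.__version__}")
print(f"CUDA available:  {torch.cuda.is_available()}")
if torch.cuda.is_available():
    # Report the name of the first visible GPU.
    print(f"GPU: {torch.cuda.get_device_name(0)}")
```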
The dataset consists of 6 dynamic scenes with motion blur. You can download this dataset from this link.
Each scene contains the following contents:

- `images`: blurry image sequence from the left camera.
- `images_xxx`: resized blurry images from the left camera.
- `disp`: depth maps of the blurry images.
- `flow_i1`: optical flow of the blurry images.
- `motion_masks`: coarse motion masks of the blurry images.
- `sharp_images`: sharp image sequence from the left camera.
- `inference_images`: sharp image sequence from the right camera.
- `poses_bounds.npy`: camera poses of the left blurry images computed by COLMAP.
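A minimal sketch of a per-scene layout check based on the list above; the scene path is a placeholder to replace with your own download location, and the resized `images_xxx` folders are skipped since their suffix depends on the chosen resolution:

```python
# check_scene.py: verify that a downloaded scene folder contains the expected contents.
import os

EXPECTED = [
    "images", "disp", "flow_i1", "motion_masks",
    "sharp_images", "inference_images", "poses_bounds.npy",
]

def check_scene(scene_dir):
    """Print which of the expected entries are missing from a scene directory."""
    missing = [name for name in EXPECTED
               if not os.path.exists(os.path.join(scene_dir, name))]
    if missing:
        print(f"{scene_dir}: missing {missing}")
    else:
        print(f"{scene_dir}: all expected contents found")

check_scene("path/to/stereo_blur_dataset/scene_name")  # placeholder path
```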
Note: The camera parameters in `poses_bounds.npy` are interleaved for the left and right cameras, following the temporal order of the video frames.
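As a sketch of what this interleaving implies when reading the file, assuming the common LLFF-style layout of one 17-value row per image (a flattened 3x5 pose matrix followed by near/far bounds); the file path and which row parity belongs to the left camera are assumptions to verify against the repo's data loader:

```python
# split_poses.py: separate interleaved left/right camera parameters in poses_bounds.npy.
# Assumes the LLFF-style layout: one row of 17 values per image
# (a flattened 3x5 pose matrix followed by the near/far depth bounds).
import numpy as np

poses_bounds = np.load("path/to/scene_name/poses_bounds.npy")  # shape: (2 * num_frames, 17)

# Rows alternate between the two cameras in temporal order, so even and odd rows
# belong to different cameras; which parity is the left camera should be checked
# against the data loader.
left_rows = poses_bounds[0::2]
right_rows = poses_bounds[1::2]

# Recover the 3x5 pose matrices and the near/far bounds for one camera.
left_poses = left_rows[:, :15].reshape(-1, 3, 5)
left_bounds = left_rows[:, 15:]

print(left_poses.shape, left_bounds.shape)
```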
To train on a scene from the dataset, run the following command, where `xxx.txt` is the config file of the scene to train:

```bash
python train.py --config configs/stereo_blur_dataset/xxx.txt
```
If you find our work useful in your research, please consider citing our paper:
```bibtex
@article{sun2024_dyblurf,
  title={DyBluRF: Dynamic Neural Radiance Fields from Blurry Monocular Video},
  author={Sun, Huiqiang and Li, Xingyi and Shen, Liao and Ye, Xinyi and Xian, Ke and Cao, Zhiguo},
  journal={arXiv preprint arXiv:2403.10103},
  year={2024}
}
```