Stabilization plays a central role in improving the quality of videos. However, current methods perform poorly under adverse weather conditions. In this paper, we propose a synthetic-aware adverse weather video stabilization algorithm that dispenses with real data for training, relying solely on synthetic data. Our approach leverages specially generated synthetic data to avoid the feature extraction issues faced by current methods. To achieve this, we present a novel data generator that produces the required training data with an automatic ground-truth extraction procedure. We also propose a new dataset, VSAC105Real, and compare our method to five recent video stabilization algorithms on two benchmarks. Our method generalizes well to real-world videos across all weather conditions and does not require large-scale synthetic training data.
- Our video stabilization code can be downloaded using this link: Click
- The VSNC35Synth and VSAC65Synth datasets can be downloaded using these links: Part1 and Part2
- The Silver simulator can be downloaded using this link: Click
- Abdulrahman Kerim, PhD Student at Lancaster University, a.kerim@lancaster.ac.uk
- Leandro Soriano Marcolino, Lecturer at Lancaster University, l.marcolino@lancaster.ac.uk
- The dataset and the framework are made freely available for academic and non-commercial purposes. They are provided “AS IS” without any warranty.
- If you use the dataset or the framework, please cite our work (the paper link will be shared in the future).
A. Kerim is supported by the Faculty of Science and Technology - Lancaster University.