🔥 EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images


1Peking University
2Peng Cheng Laboratory
3University of Science and Technology of China
4Dalian University of Technology
*These authors contributed equally to this work.
†Corresponding author.

Comparison results of novel view synthesis on the proposed datasets.
(Due to file size limitations, we show only selected scenes to demonstrate the performance of our method.)

đź“Ť Abstract

3D Gaussian Splatting (3D-GS) has demonstrated exceptional capabilities in synthesizing novel views of 3D scenes. However, its training is heavily reliant on high-quality images and precise camera poses. Meeting these criteria can be challenging in non-ideal real-world conditions, where motion-blurred images frequently occur due to high-speed camera movements or low-light environments. To address these challenges, we introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel approach that harnesses event streams captured by event cameras to facilitate the learning of high-quality 3D-GS from blurred images. Capitalizing on the high temporal resolution and dynamic range offered by event streams, we seamlessly integrate them into the initialization and optimization of 3D-GS, thereby enhancing the acquisition of high-fidelity novel views with intricate texture details. To remedy the absence of evaluation benchmarks incorporating both event streams and RGB frames, we present two novel datasets comprising RGB frames, event streams, and corresponding camera parameters, featuring a wide variety of scenes and camera motions. We then conduct a thorough evaluation of our method, comparing it with leading techniques on the provided benchmarks. The comparison results reveal that our approach not only excels in generating high-fidelity novel views, but also offers faster training and inference speeds.

🚀 Proposed Method

We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), which leverages the event streams captured by event cameras to enhance the learning of high-quality 3D-GS from motion-blurred images. Harnessing the exceptional temporal resolution and dynamic range offered by event streams, we use them to assist the initialization of 3D-GS, and incorporate them to jointly optimize the 3D-GS and the camera trajectories of the blurry images through a blur reconstruction loss and an event reconstruction loss. To counter the geometric ambiguity caused by blurry images, we further propose two event-assisted depth regularization terms that stabilize the geometry of the 3D-GS. By optimizing the 3D-GS in a progressive manner, our method recovers a high-quality 3D-GS that enables real-time rendering of high-fidelity novel views.
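To make the blur reconstruction loss concrete, here is a minimal PyTorch-style sketch: a motion-blurred frame is modeled as the temporal average of sharp views rendered along the camera trajectory recovered for the exposure window, and the synthesized blur is compared against the observation. The `render_fn` callable and the trajectory sampling are hypothetical placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def blur_reconstruction_loss(render_fn, traj_poses, blurry_img):
    """Blur reconstruction loss (sketch): a motion-blurred frame is modeled
    as the average of sharp renders along the exposure-time camera trajectory.

    render_fn:  pose -> (3, H, W) image; placeholder for the 3D-GS renderer.
    traj_poses: camera poses sampled within the exposure window (the learnable
                trajectory parameterization is a hypothetical stand-in).
    blurry_img: observed motion-blurred frame, (3, H, W), values in [0, 1].
    """
    sharp_renders = torch.stack([render_fn(pose) for pose in traj_poses])
    synthetic_blur = sharp_renders.mean(dim=0)  # temporal average over exposure
    return F.l1_loss(synthetic_blur, blurry_img)
```

In practice, the number of trajectory samples trades off blur fidelity against rendering cost per training step.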


Figure 1: Overview of EvaGaussians.
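The event reconstruction loss can be sketched analogously from the standard event generation model, in which each event signals a log-brightness change of a fixed contrast threshold C. The sketch below compares the log-intensity difference between two rendered frames against the change implied by the accumulated event map; the luma weights are the usual RGB-to-grayscale coefficients, while the threshold value and the `render_fn`/`event_map` interfaces are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn.functional as F

def event_reconstruction_loss(render_fn, pose_t1, pose_t2, event_map,
                              contrast=0.2, eps=1e-6):
    """Event reconstruction loss (sketch): the signed event count accumulated
    over [t1, t2] implies a log-brightness change of `contrast` per event.

    event_map: (H, W) signed event counts; `contrast` is the event threshold C
               (0.2 is an illustrative value, not the paper's setting).
    """
    luma = torch.tensor([0.299, 0.587, 0.114])  # RGB -> grayscale weights
    i1 = (render_fn(pose_t1) * luma[:, None, None]).sum(dim=0)
    i2 = (render_fn(pose_t2) * luma[:, None, None]).sum(dim=0)
    pred_log_diff = torch.log(i2 + eps) - torch.log(i1 + eps)
    target_log_diff = contrast * event_map  # event generation model
    return F.l1_loss(pred_log_diff, target_log_diff)
```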

🧸 Results

We evaluate our method on the NeRF-Synthetic dataset and our proposed datasets. Our method outperforms the baselines in producing high-fidelity novel views, while significantly reducing training time and offering substantial advantages in real-time application scenarios.


Figure 2: Qualitative comparison on the synthetic and real datasets. The top section (a) shows rendered novel views; the bottom section (b) shows both novel view synthesis results and input-view deblurring results. Our method achieves better performance in recovering the blurry training views as well as in rendering novel views.

Table 1. Quantitative results evaluated on NeRF-Synthetic, redesigned Deblur-NeRF, and our proposed datasets. For all compared methods, the best results are highlighted in bold and the second-best are underlined.

🥳 Demo Examples

Image comparisons between the baseline method EvdeblurNeRF and the reconstructions of our proposed method. All images are taken from the test set.

Example 1: Ours 23.71 dB · EvdeblurNeRF 22.23 dB
Example 2: Ours 24.88 dB · EvdeblurNeRF 21.62 dB
Example 3: Ours 30.26 dB · EvdeblurNeRF 29.69 dB
Example 4: Ours RankIQA↓ 5.09 · EvdeblurNeRF RankIQA↓ 5.25

đź’« Conclusion

We introduce Event Stream Assisted Gaussian Splatting (EvaGaussians), a novel framework that seamlessly integrates the event streams captured by an event camera into the training of 3D-GS, effectively addressing the challenge of reconstructing high-quality 3D-GS from motion-blurred images. We contribute two novel datasets and conduct comprehensive experiments. The results demonstrate that our method outperforms previous state-of-the-art deblurring novel view synthesis techniques in quality, without sacrificing inference efficiency. Despite its promising performance, our method may still struggle to reconstruct scenes with extremely intricate textures from severely blurred images. We will release our code and datasets for future research.

BibTeX

@misc{yu2024evagaussians,
    title={EvaGaussians: Event Stream Assisted Gaussian Splatting from Blurry Images},
    author={Wangbo Yu and Chaoran Feng and Jiye Tang and Jiashu Yang and Xu Jia and Yuchao Yang and Li Yuan and Yonghong Tian},
    year={2024},
    eprint={2405.20224},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2405.20224},
}