diff --git a/README.md b/README.md
index adc8845..cc44b4d 100644
--- a/README.md
+++ b/README.md
@@ -2,23 +2,24 @@
 A curated list of latest research papers, projects and resources related to Gaussian Splatting. Content is automatically updated daily.

-> Last Update: 2024-12-08 00:55:16
+> Last Update: 2024-12-08 02:57:05

 ## Categories

-- [3DGS Original](#3dgs-original) (2 papers) - Original 3D Gaussian Splatting papers
-- [3DGS Surveys](#3dgs-surveys) (16 papers) - Survey papers and benchmarks about 3D Gaussian Splatting
-- [Acceleration](#acceleration) (305 papers) - Papers about speeding up rendering or training
-- [Applications](#applications) (996 papers) - Papers about specific applications
-- [Avatar Generation](#avatar-generation) (339 papers) - Papers about human avatar generation
-- [Dynamic Scene](#dynamic-scene) (371 papers) - Papers about dynamic scene reconstruction and rendering
-- [Few-shot](#few-shot) (70 papers) - Papers about few-shot or sparse view reconstruction
-- [Geometry Reconstruction](#geometry-reconstruction) (342 papers) - Papers about 3D geometry reconstruction
-- [Large Scene](#large-scene) (57 papers) - Papers about large-scale scene reconstruction
-- [Model Compression](#model-compression) (356 papers) - Papers about model compression and optimization
-- [Quality Enhancement](#quality-enhancement) (173 papers) - Papers focusing on improving rendering quality
-- [SLAM](#slam) (150 papers) - Papers about SLAM using Gaussian Splatting
-- [Scene Understanding](#scene-understanding) (175 papers) - Papers about scene understanding and semantic analysis
+- [3DGS Surveys](#3dgs-surveys) (19 papers) - Survey papers and benchmarks about 3D Gaussian Splatting
+- [Acceleration](#acceleration) (321 papers) - Papers about speeding up rendering or training
+- [Applications](#applications) (1084 papers) - Papers about specific applications
+- [Avatar Generation](#avatar-generation) (366 papers) - Papers about human avatar generation
+- [Dynamic Scene](#dynamic-scene) (394 papers) - Papers about dynamic scene reconstruction and rendering
+- [Few-shot](#few-shot) (73 papers) - Papers about few-shot or sparse view reconstruction
+- [Geometry Reconstruction](#geometry-reconstruction) (364 papers) - Papers about 3D geometry reconstruction
+- [Large Scene](#large-scene) (59 papers) - Papers about large-scale scene reconstruction
+- [Model Compression](#model-compression) (390 papers) - Papers about model compression and optimization
+- [Quality Enhancement](#quality-enhancement) (180 papers) - Papers focusing on improving rendering quality
+- [Ray Tracing](#ray-tracing) (24 papers) - Papers about ray tracing and ray casting in Gaussian Splatting
+- [Relighting](#relighting) (121 papers) - Papers about relighting and illumination effects in Gaussian Splatting
+- [SLAM](#slam) (158 papers) - Papers about SLAM using Gaussian Splatting
+- [Scene Understanding](#scene-understanding) (185 papers) - Papers about scene understanding and semantic analysis
@@ -36,51 +37,40 @@ A curated list of latest research papers, projects and resources related to Gaus
 ## Categorized Papers

-### 3DGS Original
-
-- **[StopThePop: Sorted Gaussian Splatting for View-Consistent Real-time Rendering](https://arxiv.org/abs/2402.00525v3)**
-  Authors: Lukas Radl, Michael Steiner, Mathias Parger, Alexander Weinrauch, Bernhard Kerbl, Markus Steinberger
-  Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2402.00525v3.pdf)
-  Keywords: gaussian splatting, ar, real-time rendering, fast, motion, 3d gaussian, head, original gaussian splatting
-- **[GPS-Gaussian: Generalizable Pixel-wise 3D Gaussian Splatting for Real-time Human Novel View Synthesis](https://arxiv.org/abs/2312.02155v3)**
-  Authors: Shunyuan Zheng, Boyao Zhou, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
-  Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2312.02155v3.pdf)
-  Keywords: human, gaussian splatting, ar, 3d gaussian, sparse-view, original gaussian splatting
-
 ### 3DGS Surveys

 - **[Adversarial Attacks Using Differentiable Rendering: A Survey](https://arxiv.org/abs/2411.09749v1)**
   Authors: Matthew Hull, Chao Zhang, Zsolt Kira, Duen Horng Chau
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.09749v1.pdf)
-  Keywords: gaussian splatting, ar, 3d gaussian, survey, recognition
+  Keywords: 3d gaussian, gaussian splatting, recognition, survey, ar, illumination
 - **[Neural Fields in Robotics: A Survey](https://arxiv.org/abs/2410.20220v1)**
   Authors: Muhammad Zubair Irshad, Mauro Comi, Yen-Chen Lin, Nick Heppert, Abhinav Valada, Rares Ambrus, Zsolt Kira, Jonathan Tremblay
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.20220v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://robonerf.github.io)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, nerf, autonomous driving, survey, robotics, compact, high-fidelity, semantic
+  Keywords: geometry, dynamic, gaussian splatting, lighting, high-fidelity, survey, 3d reconstruction, nerf, robotics, ar, semantic, autonomous driving, compact
 - **[3D Gaussian Splatting in Robotics: A Survey](https://arxiv.org/abs/2410.12262v1)**
   Authors: Siting Zhu, Guangming Wang, Dezhi Kong, Hesheng Wang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.12262v1.pdf)
-  Keywords: gaussian splatting, ar, real-time rendering, nerf, 3d gaussian, survey, robotics, understanding
+  Keywords: 3d gaussian, gaussian splatting, survey, nerf, understanding, robotics, ar, real-time rendering
 - **[3D Representation Methods: A Survey](https://arxiv.org/abs/2410.06475v1)**
   Authors: Zhengren Wang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.06475v1.pdf)
-  Keywords: gaussian splatting, ar, nerf, 3d gaussian, survey, high-fidelity
+  Keywords: 3d gaussian, gaussian splatting, lighting, high-fidelity, survey, nerf, ar
 - **[Learning-based Multi-View Stereo: A Survey](https://arxiv.org/abs/2408.15235v1)**
   Authors: Fangjinhua Wang, Qingtian Zhu, Di Chang, Quankai Gao, Junlin Han, Tong Zhang, Richard Hartley, Marc Pollefeys
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2408.15235v1.pdf)
-  Keywords: gaussian splatting, ar, 3d reconstruction, nerf, 3d gaussian, autonomous driving, vr, survey, robotics
+  Keywords: 3d gaussian, gaussian splatting, vr, survey, 3d reconstruction, nerf, robotics, ar, autonomous driving
 - **[DESI Peculiar Velocity Survey -- Fundamental Plane](https://arxiv.org/abs/2408.13842v1)**
   Authors: Khaled Said, Cullan Howlett, Tamara Davis, John Lucey, Christoph Saulder, Kelly Douglass, Alex G. Kim, Anthony Kremin, Caitlin Ross, Greg Aldering, Jessica Nicole Aguilar, Steven Ahlen, Segev BenZvi, Davide Bianchi, David Brooks, Todd Claybaugh, Kyle Dawson, Axel de la Macorra, Biprateep Dey, Peter Doel, Kevin Fanning, Simone Ferraro, Andreu Font-Ribera, Jaime E. Forero-Romero, Enrique Gaztañaga, Satya Gontcho A Gontcho, Julien Guy, Klaus Honscheid, Robert Kehoe, Theodore Kisner, Andrew Lambert, Martin Landriau, Laurent Le Guillou, Marc Manera, Aaron Meisner, Ramon Miquel, John Moustakas, Andrea Muñoz-Gutiérrez, Adam Myers, Jundan Nie, Nathalie Palanque-Delabrouille, Will Percival, Francisco Prada, Graziano Rossi, Eusebio Sanchez, David Schlegel, Michael Schubnell, Joseph Harry Silber, David Sprayberry, Gregory Tarlé, Mariana Vargas Magana, Benjamin Alan Weaver, Risa Wechsler, Zhimin Zhou, Hu Zou
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2408.13842v1.pdf)
-  Keywords: ar, survey, 3d gaussian
+  Keywords: 3d gaussian, ar, survey
 - **[3D Gaussian Splatting: Survey, Technologies, Challenges, and Opportunities](https://arxiv.org/abs/2407.17418v1)**
   Authors: Yanqi Bao, Tianyu Ding, Jing Huo, Yaoli Liu, Yuxin Li, Wenbin Li, Yang Gao, Jiebo Luo
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2407.17418v1.pdf)
-  Keywords: gaussian splatting, ar, real-time rendering, 3d gaussian, efficient, survey, understanding
+  Keywords: 3d gaussian, gaussian splatting, survey, understanding, efficient, ar, real-time rendering
 - **[Survey on Fundamental Deep Learning 3D Reconstruction Techniques](https://arxiv.org/abs/2407.08137v1)**
   Authors: Yonge Bai, LikHang Wong, TszYin Twan
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2407.08137v1.pdf)
-  Keywords: gaussian splatting, ar, 3d reconstruction, nerf, 3d gaussian, survey
+  Keywords: 3d gaussian, gaussian splatting, lighting, survey, 3d reconstruction, nerf, ar
 - **[Panopticon: a telescope for our times](https://arxiv.org/abs/2407.05103v2)**
   Authors: Will Saunders, Timothy Chin, Michael Goodwin
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2407.05103v2.pdf)
@@ -88,502 +78,590 @@ A curated list of latest research papers, projects and resources related to Gaus
 - **[3DGS.zip: A survey on 3D Gaussian Splatting Compression Methods](https://arxiv.org/abs/2407.09510v4)**
   Authors: Milena T. Bagdasarian, Paul Knoll, Yi-Hsin Li, Florian Barthel, Anna Hilsmann, Peter Eisert, Wieland Morgenstern
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2407.09510v4.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://w-m.github.io/3dgs-compression-survey/)
-  Keywords: gaussian splatting, compression, ar, 3d gaussian, efficient, head, survey, compact
+  Keywords: head, 3d gaussian, gaussian splatting, survey, efficient, ar, compact, compression

 ### Acceleration

-*Showing the latest 50 out of 305 papers*
+*Showing the latest 50 out of 321 papers*

 - **[Turbo3D: Ultra-fast Text-to-3D Generation](https://arxiv.org/abs/2412.04470v1)**
   Authors: Hanzhe Hu, Tianwei Yin, Fujun Luan, Yiwei Hu, Hao Tan, Zexiang Xu, Sai Bi, Shubham Tulsiani, Kai Zhang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04470v1.pdf)
-  Keywords: gaussian splatting, ar, fast, efficient
+  Keywords: fast, efficient, gaussian splatting, ar
 - **[QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos](https://arxiv.org/abs/2412.04469v1)**
   Authors: Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04469v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://research.nvidia.com/labs/amri/projects/queen)
-  Keywords: dynamic, gaussian splatting, ar, high quality, fast, 3d gaussian, efficient
+  Keywords: dynamic, 3d gaussian, gaussian splatting, efficient, ar, fast, high quality
 - **[Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps](https://arxiv.org/abs/2412.04457v1)**
   Authors: Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, James Tompkin, Adam W. Harley
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04457v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://lynl7130.github.io/MonoDyGauBench.github.io/)
-  Keywords: dynamic, gaussian splatting, ar, fast, motion
+  Keywords: dynamic, motion, gaussian splatting, ar, fast
 - **[InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models](https://arxiv.org/abs/2412.03934v1)**
   Authors: Yifan Lu, Xuanchi Ren, Jiawei Yang, Tianchang Shen, Zhangjie Wu, Jun Gao, Yue Wang, Siheng Chen, Mike Chen, Sanja Fidler, Jiahui Huang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03934v1.pdf)
-  Keywords: dynamic, ar, fast, 3d gaussian
+  Keywords: fast, 3d gaussian, ar, dynamic
 - **[DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction](https://arxiv.org/abs/2412.03910v1)**
   Authors: Xuesong Li, Jinguang Tong, Jie Hong, Vivien Rolland, Lars Petersson
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03910v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, fast, face
+  Keywords: geometry, dynamic, gaussian splatting, 3d reconstruction, ar, face, fast
 - **[Splats in Splats: Embedding Invisible 3D Watermark within Gaussian Splatting](https://arxiv.org/abs/2412.03121v1)**
   Authors: Yijia Guo, Wenkai Huang, Yang Li, Gaolei Li, Hang Zhang, Liwen Hu, Jianhua Li, Tiejun Huang, Lei Ma
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03121v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://water-gs.github.io.)
-  Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, mapping
+  Keywords: 3d gaussian, gaussian splatting, mapping, 3d reconstruction, efficient, ar, fast
 - **[Gaussian Splatting Under Attack: Investigating Adversarial Noise in 3D Objects](https://arxiv.org/abs/2412.02803v1)**
   Authors: Abdurrahman Zeybey, Mehmet Ergezer, Tommy Nguyen
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02803v1.pdf)
-  Keywords: human, gaussian splatting, ar, fast, 3d gaussian, autonomous driving, robotics
+  Keywords: 3d gaussian, gaussian splatting, human, robotics, ar, autonomous driving, fast
 - **[AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction](https://arxiv.org/abs/2412.02684v1)**
   Authors: Lingteng Qiu, Shenhao Zhu, Qi Zuo, Xiaodong Gu, Yuan Dong, Junfei Zhang, Chao Xu, Zhe Li, Weihao Yuan, Liefeng Bo, Guanying Chen, Zilong Dong
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02684v1.pdf)
-  Keywords: animation, gaussian splatting, 4d, human, 3d reconstruction, real-time rendering, ar, avatar, efficient
+  Keywords: animation, gaussian splatting, 3d reconstruction, human, avatar, 4d, efficient, ar, real-time rendering
 - **[SparseGrasp: Robotic Grasping via 3D Semantic Gaussian Splatting from Sparse Multi-View RGB Images](https://arxiv.org/abs/2412.02140v1)**
   Authors: Junqiu Yu, Xinlin Ren, Yongchong Gu, Haitao Lin, Tianyu Wang, Yi Zhu, Hang Xu, Yu-Gang Jiang, Xiangyang Xue, Yanwei Fu
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02140v1.pdf)
-  Keywords: human, gaussian splatting, ar, fast, 3d gaussian, efficient, sparse-view, semantic
+  Keywords: 3d gaussian, gaussian splatting, human, efficient, ar, semantic, sparse-view, fast
 - **[Planar Gaussian Splatting](https://arxiv.org/abs/2412.01931v1)**
   Authors: Farhad G. Zanjani, Hong Cai, Hanno Ackermann, Leila Mirvakhabova, Fatih Porikli
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01931v1.pdf)
-  Keywords: gaussian splatting, segmentation, ar, geometry, fast, face, neural rendering
+  Keywords: geometry, gaussian splatting, segmentation, ar, face, neural rendering, fast

 ### Applications

-*Showing the latest 50 out of 996 papers*
+*Showing the latest 50 out of 1084 papers*

 - **[Turbo3D: Ultra-fast Text-to-3D Generation](https://arxiv.org/abs/2412.04470v1)**
   Authors: Hanzhe Hu, Tianwei Yin, Fujun Luan, Yiwei Hu, Hao Tan, Zexiang Xu, Sai Bi, Shubham Tulsiani, Kai Zhang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04470v1.pdf)
-  Keywords: gaussian splatting, ar, fast, efficient
+  Keywords: fast, efficient, gaussian splatting, ar
 - **[QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos](https://arxiv.org/abs/2412.04469v1)**
   Authors: Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04469v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://research.nvidia.com/labs/amri/projects/queen)
-  Keywords: dynamic, gaussian splatting, ar, high quality, fast, 3d gaussian, efficient
+  Keywords: dynamic, 3d gaussian, gaussian splatting, efficient, ar, fast, high quality
 - **[Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering](https://arxiv.org/abs/2412.04459v1)**
   Authors: Cheng Sun, Jaesung Choe, Charles Loop, Wei-Chiu Ma, Yu-Chiang Frank Wang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04459v1.pdf)
-  Keywords: dynamic, gaussian splatting, 4d, ar, 3d gaussian, efficient, high-fidelity
+  Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, 4d, efficient, ar
 - **[Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps](https://arxiv.org/abs/2412.04457v1)**
   Authors: Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, James Tompkin, Adam W. Harley
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04457v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://lynl7130.github.io/MonoDyGauBench.github.io/)
-  Keywords: dynamic, gaussian splatting, ar, fast, motion
+  Keywords: dynamic, motion, gaussian splatting, ar, fast
 - **[PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars](https://arxiv.org/abs/2412.04433v1)**
   Authors: Shota Sasaki, Jane Wu, Ko Nishino
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04433v1.pdf)
-  Keywords: dynamic, gaussian splatting, human, ar, avatar, motion, deformation, 3d gaussian, body
+  Keywords: dynamic, 3d gaussian, motion, gaussian splatting, deformation, human, avatar, ar, body
 - **[EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding](https://arxiv.org/abs/2412.04380v1)**
   Authors: Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, Jiwen Lu
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04380v1.pdf) | [![GitHub](https://img.shields.io/github/stars/YkiWu/EmbodiedOcc?style=social)](https://github.com/YkiWu/EmbodiedOcc)
-  Keywords: human, ar, 3d gaussian, efficient, understanding, semantic
+  Keywords: 3d gaussian, human, understanding, efficient, ar, semantic
 - **[InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models](https://arxiv.org/abs/2412.03934v1)**
   Authors: Yifan Lu, Xuanchi Ren, Jiawei Yang, Tianchang Shen, Zhangjie Wu, Jun Gao, Yue Wang, Siheng Chen, Mike Chen, Sanja Fidler, Jiahui Huang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03934v1.pdf)
-  Keywords: dynamic, ar, fast, 3d gaussian
+  Keywords: fast, 3d gaussian, ar, dynamic
 - **[Multi-View Pose-Agnostic Change Localization with Zero Labels](https://arxiv.org/abs/2412.03911v1)**
   Authors: Chamuditha Jayanga Galappaththige, Jason Lai, Lloyd Windrim, Donald Dansereau, Niko Suenderhauf, Dimity Miller
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03911v1.pdf)
-  Keywords: gaussian splatting, ar, localization, 3d gaussian
+  Keywords: 3d gaussian, lighting, gaussian splatting, ar, localization
 - **[DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction](https://arxiv.org/abs/2412.03910v1)**
   Authors: Xuesong Li, Jinguang Tong, Jie Hong, Vivien Rolland, Lars Petersson
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03910v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, fast, face
+  Keywords: geometry, dynamic, gaussian splatting, 3d reconstruction, ar, face, fast
 - **[HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting](https://arxiv.org/abs/2412.03844v1)**
   Authors: Jingyu Lin, Jiaqi Gu, Lubin Fan, Bojian Wu, Yujing Lou, Renjie Chen, Ligang Liu, Jieping Ye
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03844v1.pdf)
-  Keywords: gaussian splatting, ar, outdoor, 3d gaussian
+  Keywords: 3d gaussian, outdoor, gaussian splatting, ar

 ### Avatar Generation

-*Showing the latest 50 out of 339 papers*
+*Showing the latest 50 out of 366 papers*

 - **[PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars](https://arxiv.org/abs/2412.04433v1)**
   Authors: Shota Sasaki, Jane Wu, Ko Nishino
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04433v1.pdf)
-  Keywords: dynamic, gaussian splatting, human, ar, avatar, motion, deformation, 3d gaussian, body
+  Keywords: dynamic, 3d gaussian, motion, gaussian splatting, deformation, human, avatar, ar, body
 - **[EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding](https://arxiv.org/abs/2412.04380v1)**
   Authors: Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, Jiwen Lu
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04380v1.pdf) | [![GitHub](https://img.shields.io/github/stars/YkiWu/EmbodiedOcc?style=social)](https://github.com/YkiWu/EmbodiedOcc)
-  Keywords: human, ar, 3d gaussian, efficient, understanding, semantic
+  Keywords: 3d gaussian, human, understanding, efficient, ar, semantic
 - **[DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction](https://arxiv.org/abs/2412.03910v1)**
   Authors: Xuesong Li, Jinguang Tong, Jie Hong, Vivien Rolland, Lars Petersson
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03910v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, fast, face
+  Keywords: geometry, dynamic, gaussian splatting, 3d reconstruction, ar, face, fast
 - **[Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction](https://arxiv.org/abs/2412.03473v1)**
   Authors: Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03473v1.pdf)
-  Keywords: dynamic, gaussian splatting, 4d, ar, urban scene, face, deformation, semantic
+  Keywords: dynamic, gaussian splatting, deformation, urban scene, 4d, ar, face, semantic
 - **[2DGS-Room: Seed-Guided 2D Gaussian Splatting with Geometric Constrains for High-Fidelity Indoor Scene Reconstruction](https://arxiv.org/abs/2412.03428v1)**
   Authors: Wanting Zhang, Haodong Xiang, Zhichao Liao, Xiansong Lai, Xinghui Li, Long Zeng
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03428v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, face, 3d gaussian, high-fidelity
+  Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, ar, face
 - **[Volumetrically Consistent 3D Gaussian Rasterization](https://arxiv.org/abs/2412.03378v1)**
   Authors: Chinmay Talegaonkar, Yash Belhe, Ravi Ramamoorthi, Nicholas Antipa
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03378v1.pdf)
-  Keywords: gaussian splatting, face, ar, 3d gaussian
+  Keywords: 3d gaussian, ar, gaussian splatting, face
 - **[Gaussian Splatting Under Attack: Investigating Adversarial Noise in 3D Objects](https://arxiv.org/abs/2412.02803v1)**
   Authors: Abdurrahman Zeybey, Mehmet Ergezer, Tommy Nguyen
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02803v1.pdf)
-  Keywords: human, gaussian splatting, ar, fast, 3d gaussian, autonomous driving, robotics
+  Keywords: 3d gaussian, gaussian splatting, human, robotics, ar, autonomous driving, fast
 - **[AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction](https://arxiv.org/abs/2412.02684v1)**
   Authors: Lingteng Qiu, Shenhao Zhu, Qi Zuo, Xiaodong Gu, Yuan Dong, Junfei Zhang, Chao Xu, Zhe Li, Weihao Yuan, Liefeng Bo, Guanying Chen, Zilong Dong
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02684v1.pdf)
-  Keywords: animation, gaussian splatting, 4d, human, 3d reconstruction, real-time rendering, ar, avatar, efficient
+  Keywords: animation, gaussian splatting, 3d reconstruction, human, avatar, 4d, efficient, ar, real-time rendering
 - **[Towards Rich Emotions in 3D Avatars: A Text-to-3D Avatar Generation Benchmark](https://arxiv.org/abs/2412.02508v1)**
   Authors: Haidong Xu, Meishan Zhang, Hao Ju, Zhedong Zheng, Hongyuan Zhu, Erik Cambria, Min Zhang, Hao Fei
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02508v1.pdf)
-  Keywords: dynamic, human, ar, avatar, motion, 3d gaussian, mapping
+  Keywords: dynamic, 3d gaussian, motion, mapping, human, avatar, ar
 - **[TimeWalker: Personalized Neural Space for Lifelong Head Avatars](https://arxiv.org/abs/2412.02421v1)**
   Authors: Dongwei Pan, Yang Li, Hongsheng Li, Kwan-Yee Lin
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02421v1.pdf)
-  Keywords: dynamic, gaussian splatting, animation, human, ar, avatar, motion, deformation, head, compact
+  Keywords: dynamic, motion, head, animation, gaussian splatting, deformation, human, avatar, ar, compact

 ### Dynamic Scene

-*Showing the latest 50 out of 371 papers*
+*Showing the latest 50 out of 394 papers*

 - **[QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos](https://arxiv.org/abs/2412.04469v1)**
   Authors: Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04469v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://research.nvidia.com/labs/amri/projects/queen)
-  Keywords: dynamic, gaussian splatting, ar, high quality, fast, 3d gaussian, efficient
+  Keywords: dynamic, 3d gaussian, gaussian splatting, efficient, ar, fast, high quality
 - **[Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering](https://arxiv.org/abs/2412.04459v1)**
   Authors: Cheng Sun, Jaesung Choe, Charles Loop, Wei-Chiu Ma, Yu-Chiang Frank Wang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04459v1.pdf)
-  Keywords: dynamic, gaussian splatting, 4d, ar, 3d gaussian, efficient, high-fidelity
+  Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, 4d, efficient, ar
 - **[Monocular Dynamic Gaussian Splatting is Fast and Brittle but Smooth Motion Helps](https://arxiv.org/abs/2412.04457v1)**
   Authors: Yiqing Liang, Mikhail Okunev, Mikaela Angelina Uy, Runfeng Li, Leonidas Guibas, James Tompkin, Adam W. Harley
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04457v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://lynl7130.github.io/MonoDyGauBench.github.io/)
-  Keywords: dynamic, gaussian splatting, ar, fast, motion
+  Keywords: dynamic, motion, gaussian splatting, ar, fast
 - **[PBDyG: Position Based Dynamic Gaussians for Motion-Aware Clothed Human Avatars](https://arxiv.org/abs/2412.04433v1)**
   Authors: Shota Sasaki, Jane Wu, Ko Nishino
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04433v1.pdf)
-  Keywords: dynamic, gaussian splatting, human, ar, avatar, motion, deformation, 3d gaussian, body
+  Keywords: dynamic, 3d gaussian, motion, gaussian splatting, deformation, human, avatar, ar, body
 - **[InfiniCube: Unbounded and Controllable Dynamic 3D Driving Scene Generation with World-Guided Video Models](https://arxiv.org/abs/2412.03934v1)**
   Authors: Yifan Lu, Xuanchi Ren, Jiawei Yang, Tianchang Shen, Zhangjie Wu, Jun Gao, Yue Wang, Siheng Chen, Mike Chen, Sanja Fidler, Jiahui Huang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03934v1.pdf)
-  Keywords: dynamic, ar, fast, 3d gaussian
+  Keywords: fast, 3d gaussian, ar, dynamic
 - **[DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction](https://arxiv.org/abs/2412.03910v1)**
   Authors: Xuesong Li, Jinguang Tong, Jie Hong, Vivien Rolland, Lars Petersson
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03910v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, fast, face
+  Keywords: geometry, dynamic, gaussian splatting, 3d reconstruction, ar, face, fast
 - **[Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos](https://arxiv.org/abs/2412.03526v1)**
   Authors: Hanxue Liang, Jiawei Ren, Ashkan Mirzaei, Antonio Torralba, Ziwei Liu, Igor Gilitschenski, Sanja Fidler, Cengiz Oztireli, Huan Ling, Zan Gojcic, Jiahui Huang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03526v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, motion, 3d gaussian
+  Keywords: dynamic, 3d gaussian, motion, gaussian splatting, ar
 - **[Dense Scene Reconstruction from Light-Field Images Affected by Rolling Shutter](https://arxiv.org/abs/2412.03518v1)**
   Authors: Hermes McGriff, Renato Martins, Nicolas Andreff, Cedric Demonceaux
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03518v1.pdf) | [![GitHub](https://img.shields.io/github/stars/ICB-Vision-AI/DenseRSLF?style=social)](https://github.com/ICB-Vision-AI/DenseRSLF)
-  Keywords: ar, motion, deformation
+  Keywords: motion, ar, deformation
 - **[Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction](https://arxiv.org/abs/2412.03473v1)**
   Authors: Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03473v1.pdf)
-  Keywords: dynamic, gaussian splatting, 4d, ar, urban scene, face, deformation, semantic
+  Keywords: dynamic, gaussian splatting, deformation, urban scene, 4d, ar, face, semantic
 - **[2DGS-Room: Seed-Guided 2D Gaussian Splatting with Geometric Constrains for High-Fidelity Indoor Scene Reconstruction](https://arxiv.org/abs/2412.03428v1)**
   Authors: Wanting Zhang, Haodong Xiang, Zhichao Liao, Xiansong Lai, Xinghui Li, Long Zeng
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03428v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, face, 3d gaussian, high-fidelity
+  Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, ar, face

 ### Few-shot

-*Showing the latest 50 out of 70 papers*
+*Showing the latest 50 out of 73 papers*

 - **[SparseLGS: Sparse View Language Embedded Gaussian Splatting](https://arxiv.org/abs/2412.02245v2)**
   Authors: Jun Hu, Zhang Chen, Zhong Li, Yi Xu, Juyong Zhang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02245v2.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://ustc3dv.github.io/SparseLGS)
-  Keywords: gaussian splatting, ar, sparse view, understanding, semantic
+  Keywords: sparse view, gaussian splatting, understanding, ar, semantic
 - **[How to Use Diffusion Priors under Sparse Views?](https://arxiv.org/abs/2412.02225v1)**
   Authors: Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02225v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iCVTEAM/IPSM?style=social)](https://github.com/iCVTEAM/IPSM)
-  Keywords: gaussian splatting, ar, 3d reconstruction, geometry, 3d gaussian, sparse view, sparse-view, semantic
+  Keywords: sparse view, geometry, 3d gaussian, gaussian splatting, 3d reconstruction, ar, semantic, sparse-view
 - **[SparseGrasp: Robotic Grasping via 3D Semantic Gaussian Splatting from Sparse Multi-View RGB Images](https://arxiv.org/abs/2412.02140v1)**
   Authors: Junqiu Yu, Xinlin Ren, Yongchong Gu, Haitao Lin, Tianyu Wang, Yi Zhu, Hang Xu, Yu-Gang Jiang, Xiangyang Xue, Yanwei Fu
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02140v1.pdf)
-  Keywords: human, gaussian splatting, ar, fast, 3d gaussian, efficient, sparse-view, semantic
+  Keywords: 3d gaussian, gaussian splatting, human, efficient, ar, semantic, sparse-view, fast
 - **[DynSUP: Dynamic Gaussian Splatting from An Unposed Image Pair](https://arxiv.org/abs/2412.00851v1)**
   Authors: Weihang Li, Weirong Chen, Shenhan Qian, Jiajie Chen, Daniel Cremers, Haoang Li
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00851v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://colin-de.github.io/DynSUP/.)
-  Keywords: dynamic, gaussian splatting, ar, motion, 3d gaussian, sparse view, high-fidelity
+  Keywords: sparse view, dynamic, 3d gaussian, motion, gaussian splatting, high-fidelity, ar
 - **[FlashSLAM: Accelerated RGB-D SLAM for Real-Time 3D Scene Reconstruction with Gaussian Splatting](https://arxiv.org/abs/2412.00682v1)**
   Authors: Phu Pham, Damon Conover, Aniket Bera
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00682v1.pdf)
-  Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, sparse view, tracking, slam
+  Keywords: sparse view, 3d gaussian, slam, gaussian splatting, tracking, 3d reconstruction, efficient, ar, fast
 - **[NovelGS: Consistent Novel-view Denoising via Large Gaussian Reconstruction Model](https://arxiv.org/abs/2411.16779v1)**
   Authors: Jinpeng Liu, Jiale Xu, Weihao Cheng, Yiming Gao, Xintao Wang, Ying Shan, Yansong Tang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.16779v1.pdf)
-  Keywords: gaussian splatting, ar, fast, 3d gaussian, sparse-view
+  Keywords: 3d gaussian, gaussian splatting, ar, sparse-view, fast
 - **[GPS-Gaussian+: Generalizable Pixel-wise 3D Gaussian Splatting for Real-Time Human-Scene Rendering from Sparse Views](https://arxiv.org/abs/2411.11363v1)**
   Authors: Boyao Zhou, Shunyuan Zheng, Hanzhang Tu, Ruizhi Shao, Boning Liu, Shengping Zhang, Liqiang Nie, Yebin Liu
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.11363v1.pdf)
-  Keywords: human, gaussian splatting, ar, geometry, real-time rendering, 3d gaussian, sparse view, sparse-view
+  Keywords: sparse view, geometry, 3d gaussian, gaussian splatting, human, ar, real-time rendering, sparse-view
 - **[SPARS3R: Semantic Prior Alignment and Regularization for Sparse 3D Reconstruction](https://arxiv.org/abs/2411.12592v1)**
   Authors: Yutao Tang, Yuxiang Guo, Deming Li, Cheng Peng
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.12592v1.pdf)
-  Keywords: ar, 3d reconstruction, motion, sparse-view, semantic
+  Keywords: motion, 3d reconstruction, ar, semantic, sparse-view
 - **[4D Gaussian Splatting in the Wild with Uncertainty-Aware Regularization](https://arxiv.org/abs/2411.08879v1)**
   Authors: Mijeong Kim, Jongwoo Lim, Bohyung Han
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.08879v1.pdf)
-  Keywords: dynamic, gaussian splatting, 4d, ar, fast, few-shot, motion
+  Keywords: dynamic, motion, gaussian splatting, few-shot, 4d, ar, fast
 - **[SplatFormer: Point Transformer for Robust 3D Gaussian Splatting](https://arxiv.org/abs/2411.06390v2)**
   Authors: Yutong Chen, Marko Mihajlovic, Xiyi Chen, Yiming Wang, Sergey Prokudin, Siyu Tang
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.06390v2.pdf)
-  Keywords: gaussian splatting, sparse view, ar, 3d gaussian
+  Keywords: sparse view, 3d gaussian, ar, gaussian splatting

 ### Geometry Reconstruction

-*Showing the latest 50 out of 342 papers*
+*Showing the latest 50 out of 364 papers*

 - **[DGNS: Deformable Gaussian Splatting and Dynamic Neural Surface for Monocular Dynamic 3D Reconstruction](https://arxiv.org/abs/2412.03910v1)**
   Authors: Xuesong Li, Jinguang Tong, Jie Hong, Vivien Rolland, Lars Petersson
   Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03910v1.pdf)
-  Keywords: dynamic, gaussian splatting, ar, 3d reconstruction, geometry, fast, face
+  Keywords: geometry, dynamic, gaussian splatting, 3d reconstruction, ar, face, fast
 - **[Splats in Splats: Embedding Invisible 3D Watermark within
Gaussian Splatting](https://arxiv.org/abs/2412.03121v1)** Authors: Yijia Guo, Wenkai Huang, Yang Li, Gaolei Li, Hang Zhang, Liwen Hu, Jianhua Li, Tiejun Huang, Lei Ma Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03121v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://water-gs.github.io.) - Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, mapping + Keywords: 3d gaussian, gaussian splatting, mapping, 3d reconstruction, efficient, ar, fast - **[RoDyGS: Robust Dynamic Gaussian Splatting for Casual Videos](https://arxiv.org/abs/2412.03077v1)** Authors: Yoonwoo Jeong, Junmyeong Lee, Hoseung Choi, Minsu Cho Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03077v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://rodygs.github.io/.) - Keywords: dynamic, gaussian splatting, ar, geometry, motion, high-fidelity + Keywords: geometry, motion, dynamic, gaussian splatting, high-fidelity, ar - **[AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction](https://arxiv.org/abs/2412.02684v1)** Authors: Lingteng Qiu, Shenhao Zhu, Qi Zuo, Xiaodong Gu, Yuan Dong, Junfei Zhang, Chao Xu, Zhe Li, Weihao Yuan, Liefeng Bo, Guanying Chen, Zilong Dong Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02684v1.pdf) - Keywords: animation, gaussian splatting, 4d, human, 3d reconstruction, real-time rendering, ar, avatar, efficient + Keywords: animation, gaussian splatting, 3d reconstruction, human, avatar, 4d, efficient, ar, real-time rendering - **[Realistic Surgical Simulation from Monocular Videos](https://arxiv.org/abs/2412.02359v1)** Authors: Kailing Wang, Chen Yang, Keyang Zhao, Xiaokang Yang, Wei Shen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02359v1.pdf) | 
[![Project](https://img.shields.io/badge/-Project-blue)](https://namaenashibot.github.io/SurgiSim/.) - Keywords: dynamic, ar, geometry, motion, 3d gaussian, deformation, high-fidelity + Keywords: geometry, 3d gaussian, motion, dynamic, high-fidelity, deformation, ar - **[GSGTrack: Gaussian Splatting-Guided Object Pose Tracking from RGB Videos](https://arxiv.org/abs/2412.02267v1)** Authors: Zhiyuan Chen, Fan Lu, Guo Yu, Bin Li, Sanqing Qu, Yuan Huang, Changhong Fu, Guang Chen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02267v1.pdf) - Keywords: gaussian splatting, ar, geometry, 3d gaussian, tracking + Keywords: geometry, 3d gaussian, gaussian splatting, tracking, ar - **[Multi-robot autonomous 3D reconstruction using Gaussian splatting with Semantic guidance](https://arxiv.org/abs/2412.02249v1)** Authors: Jing Zeng, Qi Ye, Tianle Liu, Yang Xu, Jin Li, Jinming Xu, Liang Li, Jiming Chen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02249v1.pdf) - Keywords: gaussian splatting, segmentation, ar, 3d reconstruction, high quality, face, 3d gaussian, semantic + Keywords: 3d gaussian, gaussian splatting, segmentation, 3d reconstruction, ar, face, semantic, high quality - **[How to Use Diffusion Priors under Sparse Views?](https://arxiv.org/abs/2412.02225v1)** Authors: Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02225v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iCVTEAM/IPSM?style=social)](https://github.com/iCVTEAM/IPSM) - Keywords: gaussian splatting, ar, 3d reconstruction, geometry, 3d gaussian, sparse view, sparse-view, semantic + Keywords: sparse view, geometry, 3d gaussian, gaussian splatting, 3d reconstruction, ar, semantic, sparse-view - **[Gaussian Object Carver: Object-Compositional Gaussian Splatting with surfaces completion](https://arxiv.org/abs/2412.02075v1)** 
Authors: Liu Liu, Xinjie Wang, Jiaxiong Qiu, Tianwei Lin, Xiaolin Zhou, Zhizhong Su Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02075v1.pdf) - Keywords: gaussian splatting, ar, geometry, face, 3d gaussian, efficient, vr + Keywords: geometry, 3d gaussian, gaussian splatting, vr, efficient, ar, face - **[Planar Gaussian Splatting](https://arxiv.org/abs/2412.01931v1)** Authors: Farhad G. Zanjani, Hong Cai, Hanno Ackermann, Leila Mirvakhabova, Fatih Porikli Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01931v1.pdf) - Keywords: gaussian splatting, segmentation, ar, geometry, fast, face, neural rendering + Keywords: geometry, gaussian splatting, segmentation, ar, face, neural rendering, fast ### Large Scene -*Showing the latest 50 out of 57 papers* +*Showing the latest 50 out of 59 papers* - **[HybridGS: Decoupling Transients and Statics with 2D and 3D Gaussian Splatting](https://arxiv.org/abs/2412.03844v1)** Authors: Jingyu Lin, Jiaqi Gu, Lubin Fan, Bojian Wu, Yujing Lou, Renjie Chen, Ligang Liu, Jieping Ye Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03844v1.pdf) - Keywords: gaussian splatting, ar, outdoor, 3d gaussian + Keywords: 3d gaussian, outdoor, gaussian splatting, ar - **[Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction](https://arxiv.org/abs/2412.03473v1)** Authors: Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03473v1.pdf) - Keywords: dynamic, gaussian splatting, 4d, ar, urban scene, face, deformation, semantic + Keywords: dynamic, gaussian splatting, deformation, urban scene, 4d, ar, face, semantic - **[NeRF and Gaussian Splatting SLAM in the Wild](https://arxiv.org/abs/2412.03263v1)** Authors: Fabian Schmidt, Markus Enzweiler, 
Abhinav Valada Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03263v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iis-esslingen/nerf-3dgs-benchmark?style=social)](https://github.com/iis-esslingen/nerf-3dgs-benchmark) - Keywords: dynamic, gaussian splatting, ar, nerf, localization, tracking, mapping, understanding, slam, outdoor + Keywords: dynamic, outdoor, slam, lighting, gaussian splatting, tracking, mapping, nerf, understanding, ar, localization - **[Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes](https://arxiv.org/abs/2412.01745v1)** Authors: Lihan Jiang, Kerui Ren, Mulin Yu, Linning Xu, Junting Dong, Tao Lu, Feng Zhao, Dahua Lin, Bo Dai Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01745v1.pdf) - Keywords: gaussian splatting, ar, urban scene, 3d gaussian, high-fidelity + Keywords: 3d gaussian, gaussian splatting, high-fidelity, urban scene, ar - **[Tortho-Gaussian: Splatting True Digital Orthophoto Maps](https://arxiv.org/abs/2411.19594v1)** Authors: Xin Wang, Wendi Zhang, Hong Xie, Haibin Ai, Qiangqiang Yuan, Zongqian Zhan Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.19594v1.pdf) - Keywords: gaussian splatting, ar, urban scene, face, 3d gaussian + Keywords: 3d gaussian, gaussian splatting, urban scene, ar, face - **[UrbanCAD: Towards Highly Controllable and Photorealistic 3D Vehicles for Urban Scene Simulation](https://arxiv.org/abs/2411.19292v1)** Authors: Yichong Lu, Yichi Cai, Shangzhan Zhang, Hongyu Zhou, Haoji Hu, Huimin Yu, Andreas Geiger, Yiyi Liao Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.19292v1.pdf) - Keywords: urban scene, ar, high-fidelity, autonomous driving + Keywords: lighting, high-fidelity, urban scene, relighting, ar, autonomous driving - **[Unleashing the Power of Data Synthesis in Visual 
Localization](https://arxiv.org/abs/2412.00138v1)** Authors: Sihang Li, Siqi Tan, Bowen Chang, Jing Zhang, Chen Feng, Yiming Li Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00138v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://ai4ce.github.io/RAP/) - Keywords: dynamic, ar, fast, localization, 3d gaussian, robotics, outdoor + Keywords: dynamic, 3d gaussian, outdoor, robotics, ar, fast, localization - **[UniGaussian: Driving Scene Reconstruction from Multiple Camera Models via Unified Gaussian Representations](https://arxiv.org/abs/2411.15355v1)** Authors: Yuan Ren, Guile Wu, Runhao Li, Zheyuan Yang, Yibo Liu, Xingxin Chen, Tongtong Cao, Bingbing Liu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.15355v1.pdf) - Keywords: gaussian splatting, ar, real-time rendering, fast, urban scene, 3d gaussian, autonomous driving, understanding, semantic + Keywords: 3d gaussian, gaussian splatting, urban scene, understanding, ar, real-time rendering, semantic, autonomous driving, fast - **[LiV-GS: LiDAR-Vision Integration for 3D Gaussian Splatting SLAM in Outdoor Environments](https://arxiv.org/abs/2411.12185v1)** Authors: Renxiang Xiao, Wei Liu, Yushuai Chen, Liang Hu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.12185v1.pdf) - Keywords: gaussian splatting, segmentation, ar, fast, mapping, localization, 3d gaussian, tracking, outdoor, slam, semantic + Keywords: 3d gaussian, outdoor, slam, gaussian splatting, tracking, mapping, segmentation, ar, semantic, fast, localization - **[BillBoard Splatting (BBSplat): Learnable Textured Primitives for Novel View Synthesis](https://arxiv.org/abs/2411.08508v2)** Authors: David Svitov, Pietro Morerio, Lourdes Agapito, Alessio Del Bue Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.08508v2.pdf) - Keywords: gaussian splatting, 
compression, ar, nerf, efficient, outdoor + Keywords: outdoor, gaussian splatting, nerf, efficient, ar, compression ### Model Compression -*Showing the latest 50 out of 356 papers* +*Showing the latest 50 out of 390 papers* - **[Turbo3D: Ultra-fast Text-to-3D Generation](https://arxiv.org/abs/2412.04470v1)** Authors: Hanzhe Hu, Tianwei Yin, Fujun Luan, Yiwei Hu, Hao Tan, Zexiang Xu, Sai Bi, Shubham Tulsiani, Kai Zhang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04470v1.pdf) - Keywords: gaussian splatting, ar, fast, efficient + Keywords: fast, efficient, gaussian splatting, ar - **[QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint Videos](https://arxiv.org/abs/2412.04469v1)** Authors: Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04469v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://research.nvidia.com/labs/amri/projects/queen) - Keywords: dynamic, gaussian splatting, ar, high quality, fast, 3d gaussian, efficient + Keywords: dynamic, 3d gaussian, gaussian splatting, efficient, ar, fast, high quality - **[Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering](https://arxiv.org/abs/2412.04459v1)** Authors: Cheng Sun, Jaesung Choe, Charles Loop, Wei-Chiu Ma, Yu-Chiang Frank Wang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04459v1.pdf) - Keywords: dynamic, gaussian splatting, 4d, ar, 3d gaussian, efficient, high-fidelity + Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, 4d, efficient, ar - **[EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding](https://arxiv.org/abs/2412.04380v1)** Authors: Yuqi Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, Jiwen Lu Links: 
[![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04380v1.pdf) | [![GitHub](https://img.shields.io/github/stars/YkiWu/EmbodiedOcc?style=social)](https://github.com/YkiWu/EmbodiedOcc) - Keywords: human, ar, 3d gaussian, efficient, understanding, semantic + Keywords: 3d gaussian, human, understanding, efficient, ar, semantic - **[Splats in Splats: Embedding Invisible 3D Watermark within Gaussian Splatting](https://arxiv.org/abs/2412.03121v1)** Authors: Yijia Guo, Wenkai Huang, Yang Li, Gaolei Li, Hang Zhang, Liwen Hu, Jianhua Li, Tiejun Huang, Lei Ma Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03121v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://water-gs.github.io.) - Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, mapping + Keywords: 3d gaussian, gaussian splatting, mapping, 3d reconstruction, efficient, ar, fast - **[AniGS: Animatable Gaussian Avatar from a Single Image with Inconsistent Gaussian Reconstruction](https://arxiv.org/abs/2412.02684v1)** Authors: Lingteng Qiu, Shenhao Zhu, Qi Zuo, Xiaodong Gu, Yuan Dong, Junfei Zhang, Chao Xu, Zhe Li, Weihao Yuan, Liefeng Bo, Guanying Chen, Zilong Dong Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02684v1.pdf) - Keywords: animation, gaussian splatting, 4d, human, 3d reconstruction, real-time rendering, ar, avatar, efficient + Keywords: animation, gaussian splatting, 3d reconstruction, human, avatar, 4d, efficient, ar, real-time rendering - **[RelayGS: Reconstructing Dynamic Scenes with Large-Scale and Complex Motions via Relay Gaussians](https://arxiv.org/abs/2412.02493v1)** Authors: Qiankun Gao, Yanmin Wu, Chengxiang Wen, Jiarui Meng, Luyang Tang, Jie Chen, Ronggang Wang, Jian Zhang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02493v1.pdf) | 
[![GitHub](https://img.shields.io/github/stars/gqk/RelayGS?style=social)](https://github.com/gqk/RelayGS) - Keywords: dynamic, gaussian splatting, 4d, ar, motion, 3d gaussian, compact + Keywords: dynamic, 3d gaussian, motion, gaussian splatting, 4d, ar, compact - **[TimeWalker: Personalized Neural Space for Lifelong Head Avatars](https://arxiv.org/abs/2412.02421v1)** Authors: Dongwei Pan, Yang Li, Hongsheng Li, Kwan-Yee Lin Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02421v1.pdf) - Keywords: dynamic, gaussian splatting, animation, human, ar, avatar, motion, deformation, head, compact + Keywords: dynamic, motion, head, animation, gaussian splatting, deformation, human, avatar, ar, compact - **[SparseGrasp: Robotic Grasping via 3D Semantic Gaussian Splatting from Sparse Multi-View RGB Images](https://arxiv.org/abs/2412.02140v1)** Authors: Junqiu Yu, Xinlin Ren, Yongchong Gu, Haitao Lin, Tianyu Wang, Yi Zhu, Hang Xu, Yu-Gang Jiang, Xiangyang Xue, Yanwei Fu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02140v1.pdf) - Keywords: human, gaussian splatting, ar, fast, 3d gaussian, efficient, sparse-view, semantic + Keywords: 3d gaussian, gaussian splatting, human, efficient, ar, semantic, sparse-view, fast - **[Gaussian Object Carver: Object-Compositional Gaussian Splatting with surfaces completion](https://arxiv.org/abs/2412.02075v1)** Authors: Liu Liu, Xinjie Wang, Jiaxiong Qiu, Tianwei Lin, Xiaolin Zhou, Zhizhong Su Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02075v1.pdf) - Keywords: gaussian splatting, ar, geometry, face, 3d gaussian, efficient, vr + Keywords: geometry, 3d gaussian, gaussian splatting, vr, efficient, ar, face ### Quality Enhancement -*Showing the latest 50 out of 173 papers* +*Showing the latest 50 out of 180 papers* - **[QUEEN: QUantized Efficient ENcoding of Dynamic Gaussians for Streaming Free-viewpoint 
Videos](https://arxiv.org/abs/2412.04469v1)** Authors: Sharath Girish, Tianye Li, Amrita Mazumdar, Abhinav Shrivastava, David Luebke, Shalini De Mello Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04469v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://research.nvidia.com/labs/amri/projects/queen) - Keywords: dynamic, gaussian splatting, ar, high quality, fast, 3d gaussian, efficient + Keywords: dynamic, 3d gaussian, gaussian splatting, efficient, ar, fast, high quality - **[Sparse Voxels Rasterization: Real-time High-fidelity Radiance Field Rendering](https://arxiv.org/abs/2412.04459v1)** Authors: Cheng Sun, Jaesung Choe, Charles Loop, Wei-Chiu Ma, Yu-Chiang Frank Wang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04459v1.pdf) - Keywords: dynamic, gaussian splatting, 4d, ar, 3d gaussian, efficient, high-fidelity + Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, 4d, efficient, ar - **[2DGS-Room: Seed-Guided 2D Gaussian Splatting with Geometric Constrains for High-Fidelity Indoor Scene Reconstruction](https://arxiv.org/abs/2412.03428v1)** Authors: Wanting Zhang, Haodong Xiang, Zhichao Liao, Xiansong Lai, Xinghui Li, Long Zeng Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03428v1.pdf) - Keywords: dynamic, gaussian splatting, ar, face, 3d gaussian, high-fidelity + Keywords: dynamic, 3d gaussian, gaussian splatting, high-fidelity, ar, face - **[RoDyGS: Robust Dynamic Gaussian Splatting for Casual Videos](https://arxiv.org/abs/2412.03077v1)** Authors: Yoonwoo Jeong, Junmyeong Lee, Hoseung Choi, Minsu Cho Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03077v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://rodygs.github.io/.) 
- Keywords: dynamic, gaussian splatting, ar, geometry, motion, high-fidelity + Keywords: geometry, motion, dynamic, gaussian splatting, high-fidelity, ar - **[Realistic Surgical Simulation from Monocular Videos](https://arxiv.org/abs/2412.02359v1)** Authors: Kailing Wang, Chen Yang, Keyang Zhao, Xiaokang Yang, Wei Shen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02359v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://namaenashibot.github.io/SurgiSim/.) - Keywords: dynamic, ar, geometry, motion, 3d gaussian, deformation, high-fidelity + Keywords: geometry, 3d gaussian, motion, dynamic, high-fidelity, deformation, ar - **[Multi-robot autonomous 3D reconstruction using Gaussian splatting with Semantic guidance](https://arxiv.org/abs/2412.02249v1)** Authors: Jing Zeng, Qi Ye, Tianle Liu, Yang Xu, Jin Li, Jinming Xu, Liang Li, Jiming Chen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02249v1.pdf) - Keywords: gaussian splatting, segmentation, ar, 3d reconstruction, high quality, face, 3d gaussian, semantic + Keywords: 3d gaussian, gaussian splatting, segmentation, 3d reconstruction, ar, face, semantic, high quality - **[HDGS: Textured 2D Gaussian Splatting for Enhanced Scene Rendering](https://arxiv.org/abs/2412.01823v1)** Authors: Yunzhou Song, Heguang Lin, Jiahui Lei, Lingjie Liu, Kostas Daniilidis Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01823v1.pdf) - Keywords: gaussian splatting, ar, geometry, face, neural rendering, high-fidelity + Keywords: geometry, gaussian splatting, high-fidelity, ar, face, neural rendering - **[Horizon-GS: Unified 3D Gaussian Splatting for Large-Scale Aerial-to-Ground Scenes](https://arxiv.org/abs/2412.01745v1)** Authors: Lihan Jiang, Kerui Ren, Mulin Yu, Linning Xu, Junting Dong, Tao Lu, Feng Zhao, Dahua Lin, Bo Dai Links: 
[![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01745v1.pdf) - Keywords: gaussian splatting, ar, urban scene, 3d gaussian, high-fidelity + Keywords: 3d gaussian, gaussian splatting, high-fidelity, urban scene, ar - **[Driving Scene Synthesis on Free-form Trajectories with Generative Prior](https://arxiv.org/abs/2412.01717v1)** Authors: Zeyu Yang, Zijie Pan, Yuankun Yang, Xiatian Zhu, Li Zhang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01717v1.pdf) - Keywords: gaussian splatting, face, ar, high-fidelity + Keywords: ar, gaussian splatting, face, high-fidelity - **[Diffusion Models with Anisotropic Gaussian Splatting for Image Inpainting](https://arxiv.org/abs/2412.01682v2)** Authors: Jacob Fein-Ashley, Benjamin Fein-Ashley Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01682v2.pdf) - Keywords: gaussian splatting, ar, high-fidelity + Keywords: ar, gaussian splatting, high-fidelity + +### Ray Tracing + +- **[RF-3DGS: Wireless Channel Modeling with Radio Radiance Field and 3D Gaussian Splatting](https://arxiv.org/abs/2411.19420v1)** + Authors: Lihao Zhang, Haijian Sun, Samuel Berweger, Camillo Gentile, Rose Qingyang Hu + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.19420v1.pdf) + Keywords: ray tracing, 3d gaussian, ar, gaussian splatting +- **[URAvatar: Universal Relightable Gaussian Codec Avatars](https://arxiv.org/abs/2410.24223v1)** + Authors: Junxuan Li, Chen Cao, Gabriel Schwartz, Rawal Khirodkar, Christian Richardt, Tomas Simon, Yaser Sheikh, Shunsuke Saito + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.24223v1.pdf) + Keywords: head, 3d gaussian, relightable, human, avatar, efficient, light transport, illumination, real-time rendering, ar, global illumination +- **[Multi-Layer Gaussian Splatting for Immersive Anatomy 
Visualization](https://arxiv.org/abs/2410.16978v1)** + Authors: Constantin Kleinbeck, Hannah Schieber, Klaus Engel, Ralf Gutjahr, Daniel Roth + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.16978v1.pdf) + Keywords: head, path tracing, medical, gaussian splatting, vr, understanding, efficient, ar +- **[GS^3: Efficient Relighting with Triple Gaussian Splatting](https://arxiv.org/abs/2410.11419v1)** + Authors: Zoubin Bi, Yixin Zeng, Chong Zeng, Fan Pei, Xiang Feng, Kun Zhou, Hongzhi Wu + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.11419v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://GSrelight.github.io/.) + Keywords: geometry, lighting, gaussian splatting, relighting, efficient, shadow, illumination, ar, global illumination +- **[RGM: Reconstructing High-fidelity 3D Car Assets with Relightable 3D-GS Generative Model from a Single Image](https://arxiv.org/abs/2410.08181v1)** + Authors: Xiaoxue Chen, Jv Zheng, Hao Huang, Haoran Xu, Weihao Gu, Kangliang Chen, He xiang, Huan-ang Gao, Hao Zhao, Guyue Zhou, Yaqin Zhang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.08181v1.pdf) + Keywords: geometry, 3d gaussian, relightable, lighting, high-fidelity, nerf, relighting, ar, illumination, autonomous driving, global illumination +- **[6DGS: Enhanced Direction-Aware Gaussian Splatting for Volumetric Rendering](https://arxiv.org/abs/2410.04974v2)** + Authors: Zhongpai Gao, Benjamin Planche, Meng Zheng, Anwesa Choudhuri, Terrence Chen, Ziyan Wu + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.04974v2.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://gaozhongpai.github.io/6dgs/) + Keywords: 3d gaussian, gaussian splatting, nerf, ray tracing, ar, real-time rendering, high quality +- **[GI-GS: Global Illumination Decomposition on Gaussian 
Splatting for Inverse Rendering](https://arxiv.org/abs/2410.02619v1)** + Authors: Hongze Chen, Zehong Lin, Jun Zhang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.02619v1.pdf) + Keywords: path tracing, 3d gaussian, geometry, gaussian splatting, lighting, high-fidelity, relighting, efficient, shadow, illumination, lightweight, ar, global illumination +- **[EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis](https://arxiv.org/abs/2410.01804v5)** + Authors: Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T. Barron, Yinda Zhang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2410.01804v5.pdf) + Keywords: 3d gaussian, gaussian splatting, nerf, ray tracing, ar +- **[SpikeGS: Learning 3D Gaussian Fields from Continuous Spike Stream](https://arxiv.org/abs/2409.15176v5)** + Authors: Jinze Yu, Xin Peng, Zhengda Lu, Laurent Kneip, Yiqun Wang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2409.15176v5.pdf) | [![GitHub](https://img.shields.io/github/stars/520jz/SpikeGS?style=social)](https://github.com/520jz/SpikeGS) + Keywords: dynamic, 3d gaussian, lighting, ar, illumination, real-time rendering, ray marching +- **[CrossRT: A cross platform programming technology for hardware-accelerated ray tracing in CG and CV applications](https://arxiv.org/abs/2409.12617v1)** + Authors: Vladimir Frolov, Vadim Sanzharov, Garifullin Albert, Maxim Raenchuk, Alexei Voloboy + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2409.12617v1.pdf) + Keywords: path tracing, 3d gaussian, gaussian splatting, nerf, ray tracing, acceleration, efficient, ar, face + +### Relighting + +*Showing the latest 50 out of 121 papers* + +- **[Multi-View Pose-Agnostic Change Localization with Zero Labels](https://arxiv.org/abs/2412.03911v1)** + Authors: 
Chamuditha Jayanga Galappaththige, Jason Lai, Lloyd Windrim, Donald Dansereau, Niko Suenderhauf, Dimity Miller + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03911v1.pdf) + Keywords: 3d gaussian, lighting, gaussian splatting, ar, localization +- **[NeRF and Gaussian Splatting SLAM in the Wild](https://arxiv.org/abs/2412.03263v1)** + Authors: Fabian Schmidt, Markus Enzweiler, Abhinav Valada + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03263v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iis-esslingen/nerf-3dgs-benchmark?style=social)](https://github.com/iis-esslingen/nerf-3dgs-benchmark) + Keywords: dynamic, outdoor, slam, lighting, gaussian splatting, tracking, mapping, nerf, understanding, ar, localization +- **[HUGSIM: A Real-Time, Photo-Realistic and Closed-Loop Simulator for Autonomous Driving](https://arxiv.org/abs/2412.01718v1)** + Authors: Hongyu Zhou, Longzhong Lin, Jiabao Wang, Yichong Lu, Dongfeng Bai, Bingbing Liu, Yue Wang, Andreas Geiger, Yiyi Liao + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01718v1.pdf) + Keywords: dynamic, 3d gaussian, lighting, gaussian splatting, ar, autonomous driving +- **[Ref-GS: Directional Factorization for 2D Gaussian Splatting](https://arxiv.org/abs/2412.00905v1)** + Authors: Youjia Zhang, Anpei Chen, Yumin Wan, Zikai Song, Junqing Yu, Yawei Luo, Wei Yang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00905v1.pdf) + Keywords: geometry, head, lighting, gaussian splatting, efficient, ar, face +- **[A Lesson in Splats: Teacher-Guided Diffusion for 3D Gaussian Splats Generation with 2D Supervision](https://arxiv.org/abs/2412.00623v1)** + Authors: Chensheng Peng, Ido Sobol, Masayoshi Tomizuka, Kurt Keutzer, Chenfeng Xu, Or Litany + Links: 
[![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00623v1.pdf) + Keywords: 3d gaussian, ar, lighting +- **[UrbanCAD: Towards Highly Controllable and Photorealistic 3D Vehicles for Urban Scene Simulation](https://arxiv.org/abs/2411.19292v1)** + Authors: Yichong Lu, Yichi Cai, Shangzhan Zhang, Hongyu Zhou, Haoji Hu, Huimin Yu, Andreas Geiger, Yiyi Liao + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.19292v1.pdf) + Keywords: lighting, high-fidelity, urban scene, relighting, ar, autonomous driving +- **[InstanceGaussian: Appearance-Semantic Joint Gaussian Representation for 3D Instance-Level Perception](https://arxiv.org/abs/2411.19235v1)** + Authors: Haijie Li, Yanmin Wu, Jiarui Meng, Qiankun Gao, Zhiyao Zhang, Ronggang Wang, Jian Zhang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.19235v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://lhj-git.github.io/InstanceGaussian/) + Keywords: 3d gaussian, lighting, gaussian splatting, segmentation, understanding, robotics, efficient, ar, semantic, autonomous driving +- **[SuperGaussians: Enhancing Gaussian Splatting Using Primitives with Spatially Varying Colors](https://arxiv.org/abs/2411.18966v1)** + Authors: Rui Xu, Wenyue Chen, Jiepeng Wang, Yuan Liu, Peng Wang, Lin Gao, Shiqing Xin, Taku Komura, Xin Li, Wenping Wang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.18966v1.pdf) + Keywords: geometry, lighting, gaussian splatting, ar, compact +- **[NexusSplats: Efficient 3D Gaussian Splatting in the Wild](https://arxiv.org/abs/2411.14514v4)** + Authors: Yuzhou Tang, Dejun Xu, Yongjie Hou, Zhenzhong Wang, Min Jiang + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.14514v4.pdf) + Keywords: 3d gaussian, lighting, gaussian splatting, mapping, efficient, ar +- **[PR-ENDO: 
Physically Based Relightable Gaussian Splatting for Endoscopy](https://arxiv.org/abs/2411.12510v1)** + Authors: Joanna Kaleta, Weronika Smolak-Dyżewska, Dawid Malarz, Diego Dall'Alba, Przemysław Korzeniowski, Przemysław Spurek + Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2411.12510v1.pdf) + Keywords: 3d gaussian, relightable, gaussian splatting, lighting, relighting, ar, illumination ### SLAM -*Showing the latest 50 out of 150 papers* +*Showing the latest 50 out of 158 papers* - **[Multi-View Pose-Agnostic Change Localization with Zero Labels](https://arxiv.org/abs/2412.03911v1)** Authors: Chamuditha Jayanga Galappaththige, Jason Lai, Lloyd Windrim, Donald Dansereau, Niko Suenderhauf, Dimity Miller Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03911v1.pdf) - Keywords: gaussian splatting, ar, localization, 3d gaussian + Keywords: 3d gaussian, lighting, gaussian splatting, ar, localization - **[NeRF and Gaussian Splatting SLAM in the Wild](https://arxiv.org/abs/2412.03263v1)** Authors: Fabian Schmidt, Markus Enzweiler, Abhinav Valada Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03263v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iis-esslingen/nerf-3dgs-benchmark?style=social)](https://github.com/iis-esslingen/nerf-3dgs-benchmark) - Keywords: dynamic, gaussian splatting, ar, nerf, localization, tracking, mapping, understanding, slam, outdoor + Keywords: dynamic, outdoor, slam, lighting, gaussian splatting, tracking, mapping, nerf, understanding, ar, localization - **[Splats in Splats: Embedding Invisible 3D Watermark within Gaussian Splatting](https://arxiv.org/abs/2412.03121v1)** Authors: Yijia Guo, Wenkai Huang, Yang Li, Gaolei Li, Hang Zhang, Liwen Hu, Jianhua Li, Tiejun Huang, Lei Ma Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03121v1.pdf) | 
[![Project](https://img.shields.io/badge/-Project-blue)](https://water-gs.github.io) - Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, mapping + Keywords: 3d gaussian, gaussian splatting, mapping, 3d reconstruction, efficient, ar, fast - **[Towards Rich Emotions in 3D Avatars: A Text-to-3D Avatar Generation Benchmark](https://arxiv.org/abs/2412.02508v1)** Authors: Haidong Xu, Meishan Zhang, Hao Ju, Zhedong Zheng, Hongyuan Zhu, Erik Cambria, Min Zhang, Hao Fei Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02508v1.pdf) - Keywords: dynamic, human, ar, avatar, motion, 3d gaussian, mapping + Keywords: dynamic, 3d gaussian, motion, mapping, human, avatar, ar - **[GSGTrack: Gaussian Splatting-Guided Object Pose Tracking from RGB Videos](https://arxiv.org/abs/2412.02267v1)** Authors: Zhiyuan Chen, Fan Lu, Guo Yu, Bin Li, Sanqing Qu, Yuan Huang, Changhong Fu, Guang Chen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02267v1.pdf) - Keywords: gaussian splatting, ar, geometry, 3d gaussian, tracking + Keywords: geometry, 3d gaussian, gaussian splatting, tracking, ar - **[CTRL-D: Controllable Dynamic 3D Scene Editing with Personalized 2D Diffusion](https://arxiv.org/abs/2412.01792v1)** Authors: Kai He, Chin-Hsuan Wu, Igor Gilitschenski Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01792v1.pdf) - Keywords: dynamic, gaussian splatting, ar, 3d gaussian, tracking + Keywords: dynamic, 3d gaussian, gaussian splatting, tracking, ar - **[6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting](https://arxiv.org/abs/2412.01543v1)** Authors: Yufeng Jin, Vignesh Prasad, Snehal Jauhri, Mathias Franzius, Georgia Chalvatzaki Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01543v1.pdf) - Keywords: dynamic, gaussian splatting, ar, fast, efficient, autonomous
driving, tracking, robotics + Keywords: dynamic, gaussian splatting, tracking, robotics, efficient, ar, autonomous driving, fast - **[RGBDS-SLAM: A RGB-D Semantic Dense SLAM Based on 3D Multi Level Pyramid Gaussian Splatting](https://arxiv.org/abs/2412.01217v2)** Authors: Zhenzhong Cao, Chenyang Zhao, Qianyi Zhang, Jinzheng Guang, Yinuo Song, Jingtai Liu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01217v2.pdf) | [![GitHub](https://img.shields.io/github/stars/zhenzhongcao/RGBDS-SLAM?style=social)](https://github.com/zhenzhongcao/RGBDS-SLAM) - Keywords: gaussian splatting, ar, 3d gaussian, slam, semantic + Keywords: 3d gaussian, slam, gaussian splatting, ar, semantic - **[FlashSLAM: Accelerated RGB-D SLAM for Real-Time 3D Scene Reconstruction with Gaussian Splatting](https://arxiv.org/abs/2412.00682v1)** Authors: Phu Pham, Damon Conover, Aniket Bera Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00682v1.pdf) - Keywords: gaussian splatting, ar, 3d reconstruction, fast, 3d gaussian, efficient, sparse view, tracking, slam + Keywords: sparse view, 3d gaussian, slam, gaussian splatting, tracking, 3d reconstruction, efficient, ar, fast - **[LineGS : 3D Line Segment Representation on 3D Gaussian Splatting](https://arxiv.org/abs/2412.00477v1)** Authors: Chenggang Yang, Yuang Shi, Wei Tsang Ooi Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.00477v1.pdf) - Keywords: gaussian splatting, ar, 3d reconstruction, geometry, face, localization, 3d gaussian, efficient, mapping + Keywords: geometry, 3d gaussian, gaussian splatting, mapping, 3d reconstruction, efficient, ar, face, localization ### Scene Understanding -*Showing the latest 50 out of 175 papers* +*Showing the latest 50 out of 185 papers* - **[EmbodiedOcc: Embodied 3D Occupancy Prediction for Vision-based Online Scene Understanding](https://arxiv.org/abs/2412.04380v1)** Authors: Yuqi
Wu, Wenzhao Zheng, Sicheng Zuo, Yuanhui Huang, Jie Zhou, Jiwen Lu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.04380v1.pdf) | [![GitHub](https://img.shields.io/github/stars/YkiWu/EmbodiedOcc?style=social)](https://github.com/YkiWu/EmbodiedOcc) - Keywords: human, ar, 3d gaussian, efficient, understanding, semantic + Keywords: 3d gaussian, human, understanding, efficient, ar, semantic - **[Urban4D: Semantic-Guided 4D Gaussian Splatting for Urban Scene Reconstruction](https://arxiv.org/abs/2412.03473v1)** Authors: Ziwen Li, Jiaxin Huang, Runnan Chen, Yunlong Che, Yandong Guo, Tongliang Liu, Fakhri Karray, Mingming Gong Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03473v1.pdf) - Keywords: dynamic, gaussian splatting, 4d, ar, urban scene, face, deformation, semantic + Keywords: dynamic, gaussian splatting, deformation, urban scene, 4d, ar, face, semantic - **[NeRF and Gaussian Splatting SLAM in the Wild](https://arxiv.org/abs/2412.03263v1)** Authors: Fabian Schmidt, Markus Enzweiler, Abhinav Valada Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.03263v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iis-esslingen/nerf-3dgs-benchmark?style=social)](https://github.com/iis-esslingen/nerf-3dgs-benchmark) - Keywords: dynamic, gaussian splatting, ar, nerf, localization, tracking, mapping, understanding, slam, outdoor + Keywords: dynamic, outdoor, slam, lighting, gaussian splatting, tracking, mapping, nerf, understanding, ar, localization - **[Multi-robot autonomous 3D reconstruction using Gaussian splatting with Semantic guidance](https://arxiv.org/abs/2412.02249v1)** Authors: Jing Zeng, Qi Ye, Tianle Liu, Yang Xu, Jin Li, Jinming Xu, Liang Li, Jiming Chen Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02249v1.pdf) - Keywords: gaussian splatting, segmentation, ar, 3d 
reconstruction, high quality, face, 3d gaussian, semantic + Keywords: 3d gaussian, gaussian splatting, segmentation, 3d reconstruction, ar, face, semantic, high quality - **[SparseLGS: Sparse View Language Embedded Gaussian Splatting](https://arxiv.org/abs/2412.02245v2)** Authors: Jun Hu, Zhang Chen, Zhong Li, Yi Xu, Juyong Zhang Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02245v2.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://ustc3dv.github.io/SparseLGS) - Keywords: gaussian splatting, ar, sparse view, understanding, semantic + Keywords: sparse view, gaussian splatting, understanding, ar, semantic - **[How to Use Diffusion Priors under Sparse Views?](https://arxiv.org/abs/2412.02225v1)** Authors: Qisen Wang, Yifan Zhao, Jiawei Ma, Jia Li Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02225v1.pdf) | [![GitHub](https://img.shields.io/github/stars/iCVTEAM/IPSM?style=social)](https://github.com/iCVTEAM/IPSM) - Keywords: gaussian splatting, ar, 3d reconstruction, geometry, 3d gaussian, sparse view, sparse-view, semantic + Keywords: sparse view, geometry, 3d gaussian, gaussian splatting, 3d reconstruction, ar, semantic, sparse-view - **[SparseGrasp: Robotic Grasping via 3D Semantic Gaussian Splatting from Sparse Multi-View RGB Images](https://arxiv.org/abs/2412.02140v1)** Authors: Junqiu Yu, Xinlin Ren, Yongchong Gu, Haitao Lin, Tianyu Wang, Yi Zhu, Hang Xu, Yu-Gang Jiang, Xiangyang Xue, Yanwei Fu Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.02140v1.pdf) - Keywords: human, gaussian splatting, ar, fast, 3d gaussian, efficient, sparse-view, semantic + Keywords: 3d gaussian, gaussian splatting, human, efficient, ar, semantic, sparse-view, fast - **[Planar Gaussian Splatting](https://arxiv.org/abs/2412.01931v1)** Authors: Farhad G. 
Zanjani, Hong Cai, Hanno Ackermann, Leila Mirvakhabova, Fatih Porikli Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01931v1.pdf) - Keywords: gaussian splatting, segmentation, ar, geometry, fast, face, neural rendering + Keywords: geometry, gaussian splatting, segmentation, ar, face, neural rendering, fast - **[Occam's LGS: A Simple Approach for Language Gaussian Splatting](https://arxiv.org/abs/2412.01807v1)** Authors: Jiahuan Cheng, Jan-Nico Zaech, Luc Van Gool, Danda Pani Paudel Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01807v1.pdf) | [![Project](https://img.shields.io/badge/-Project-blue)](https://insait-institute.github.io/OccamLGS/) - Keywords: gaussian splatting, compression, ar, 3d reconstruction, 3d gaussian, efficient, understanding, semantic + Keywords: 3d gaussian, gaussian splatting, 3d reconstruction, understanding, efficient, ar, semantic, compression - **[3DSceneEditor: Controllable 3D Scene Editing with Gaussian Splatting](https://arxiv.org/abs/2412.01583v1)** Authors: Ziyang Yan, Lei Li, Yihua Shao, Siyu Chen, Wuzong Kai, Jenq-Neng Hwang, Hao Zhao, Fabio Remondino Links: [![PDF](https://img.shields.io/badge/PDF-arXiv-b31b1b.svg)](https://arxiv.org/pdf/2412.01583v1.pdf) - Keywords: gaussian splatting, segmentation, ar, efficient, semantic + Keywords: gaussian splatting, segmentation, efficient, ar, semantic diff --git a/data/papers_2024-12-08.json b/data/papers_2024-12-08.json index 2fa743e..0039dd8 100644 --- a/data/papers_2024-12-08.json +++ b/data/papers_2024-12-08.json @@ -21,10 +21,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "fast", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -50,12 +50,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "high quality", "fast", - "3d gaussian", - "efficient" + "high 
quality" ], "citations": 0, "semantic_url": "" @@ -80,12 +80,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "high-fidelity", "4d", - "ar", - "3d gaussian", "efficient", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -111,10 +111,10 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", "ar", - "fast", - "motion" + "fast" ], "citations": 0, "semantic_url": "" @@ -136,13 +136,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "deformation", "human", - "ar", "avatar", - "motion", - "deformation", - "3d gaussian", + "ar", "body" ], "citations": 0, @@ -169,11 +169,11 @@ ], "github_url": "https://github.com/YkiWu/EmbodiedOcc", "keywords": [ - "human", - "ar", "3d gaussian", - "efficient", + "human", "understanding", + "efficient", + "ar", "semantic" ], "citations": 0, @@ -205,10 +205,10 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "fast", - "3d gaussian" + "3d gaussian", + "ar", + "dynamic" ], "citations": 0, "semantic_url": "" @@ -232,10 +232,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", "ar", - "localization", - "3d gaussian" + "localization" ], "citations": 0, "semantic_url": "" @@ -258,13 +259,13 @@ ], "github_url": "", "keywords": [ + "geometry", "dynamic", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "fast", - "face" + "ar", + "face", + "fast" ], "citations": 0, "semantic_url": "" @@ -291,10 +292,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "3d gaussian", "outdoor", - "3d gaussian" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -326,10 +327,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", + "3d gaussian", "motion", - "3d gaussian" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -351,8 +352,8 @@ ], "github_url": 
"https://github.com/ICB-Vision-AI/DenseRSLF", "keywords": [ - "ar", "motion", + "ar", "deformation" ], "citations": 0, @@ -381,11 +382,11 @@ "keywords": [ "dynamic", "gaussian splatting", + "deformation", + "urban scene", "4d", "ar", - "urban scene", "face", - "deformation", "semantic" ], "citations": 0, @@ -411,11 +412,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", - "face", - "3d gaussian", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -437,10 +438,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "face", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -464,10 +465,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "neural rendering", "3d gaussian", - "neural rendering" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -491,15 +492,16 @@ "github_url": "https://github.com/iis-esslingen/nerf-3dgs-benchmark", "keywords": [ "dynamic", + "outdoor", + "slam", + "lighting", "gaussian splatting", - "ar", - "nerf", - "localization", "tracking", "mapping", + "nerf", "understanding", - "slam", - "outdoor" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -527,13 +529,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "fast", - "3d gaussian", "efficient", - "mapping" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -555,12 +557,12 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", "motion", - "high-fidelity" + "dynamic", + "gaussian splatting", + "high-fidelity", + "ar" ], "citations": 0, "semantic_url": "" @@ -583,13 +585,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "human", + "robotics", "ar", - "fast", - "3d gaussian", "autonomous driving", - "robotics" + "fast" ], 
"citations": 0, "semantic_url": "" @@ -622,13 +624,13 @@ "keywords": [ "animation", "gaussian splatting", - "4d", - "human", "3d reconstruction", - "real-time rendering", - "ar", + "human", "avatar", - "efficient" + "4d", + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -656,12 +658,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", + "mapping", "human", - "ar", "avatar", - "motion", - "3d gaussian", - "mapping" + "ar" ], "citations": 0, "semantic_url": "" @@ -688,11 +690,11 @@ "github_url": "https://github.com/gqk/RelayGS", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "4d", "ar", - "motion", - "3d gaussian", "compact" ], "citations": 0, @@ -716,14 +718,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "motion", + "head", "animation", + "gaussian splatting", + "deformation", "human", - "ar", "avatar", - "motion", - "deformation", - "head", + "ar", "compact" ], "citations": 0, @@ -747,13 +749,13 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", - "motion", "3d gaussian", + "motion", + "dynamic", + "high-fidelity", "deformation", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -780,11 +782,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "tracking" + "gaussian splatting", + "tracking", + "ar" ], "citations": 0, "semantic_url": "" @@ -811,14 +813,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "3d reconstruction", - "high quality", + "ar", "face", - "3d gaussian", - "semantic" + "semantic", + "high quality" ], "citations": 0, "semantic_url": "" @@ -841,10 +843,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "sparse view", + "gaussian splatting", "understanding", + "ar", "semantic" ], "citations": 0, @@ -867,14 +869,14 @@ ], "github_url": "https://github.com/iCVTEAM/IPSM", 
"keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", + "sparse view", "geometry", "3d gaussian", - "sparse view", - "sparse-view", - "semantic" + "gaussian splatting", + "3d reconstruction", + "ar", + "semantic", + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -904,14 +906,14 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", - "fast", "3d gaussian", + "gaussian splatting", + "human", "efficient", + "ar", + "semantic", "sparse-view", - "semantic" + "fast" ], "citations": 0, "semantic_url": "" @@ -936,13 +938,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", + "gaussian splatting", + "vr", "efficient", - "vr" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -965,13 +967,13 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", "segmentation", "ar", - "geometry", - "fast", "face", - "neural rendering" + "neural rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -995,12 +997,12 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", + "high-fidelity", "ar", - "geometry", "face", - "neural rendering", - "high-fidelity" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -1022,14 +1024,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", - "ar", "3d reconstruction", - "3d gaussian", - "efficient", "understanding", - "semantic" + "efficient", + "ar", + "semantic", + "compression" ], "citations": 0, "semantic_url": "" @@ -1052,10 +1054,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", - "tracking" + "gaussian splatting", + "tracking", + "ar" ], "citations": 0, "semantic_url": "" @@ -1082,11 +1084,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "high-fidelity", "urban scene", - "3d gaussian", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -1115,9 
+1117,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "lighting", "gaussian splatting", "ar", - "3d gaussian", "autonomous driving" ], "citations": 0, @@ -1141,9 +1144,9 @@ ], "github_url": "", "keywords": [ + "ar", "gaussian splatting", "face", - "ar", "high-fidelity" ], "citations": 0, @@ -1164,8 +1167,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "high-fidelity" ], "citations": 0, @@ -1194,8 +1197,8 @@ "keywords": [ "gaussian splatting", "segmentation", - "ar", "efficient", + "ar", "semantic" ], "citations": 0, @@ -1216,11 +1219,11 @@ ], "github_url": "https://github.com/jibo27/3DGS_Hierarchical_Training", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", "4d", - "ar", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -1272,12 +1275,12 @@ "keywords": [ "dynamic", "gaussian splatting", - "ar", - "fast", + "tracking", + "robotics", "efficient", + "ar", "autonomous driving", - "tracking", - "robotics" + "fast" ], "citations": 0, "semantic_url": "" @@ -1304,9 +1307,9 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -1331,10 +1334,10 @@ "github_url": "", "keywords": [ "gaussian splatting", + "high-fidelity", "ar", - "face", "efficient", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -1357,10 +1360,10 @@ ], "github_url": "https://github.com/zhenzhongcao/RGBDS-SLAM", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", "slam", + "gaussian splatting", + "ar", "semantic" ], "citations": 0, @@ -1386,10 +1389,10 @@ ], "github_url": "", "keywords": [ + "body", "dynamic", - "avatar", "ar", - "body" + "avatar" ], "citations": 0, "semantic_url": "" @@ -1415,12 +1418,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "head", - "efficient" + "lighting", + "gaussian splatting", + "efficient", + "ar", + "face" ], 
"citations": 0, "semantic_url": "" @@ -1444,13 +1448,13 @@ ], "github_url": "", "keywords": [ + "sparse view", "dynamic", - "gaussian splatting", - "ar", - "motion", "3d gaussian", - "sparse view", - "high-fidelity" + "motion", + "gaussian splatting", + "high-fidelity", + "ar" ], "citations": 0, "semantic_url": "" @@ -1474,11 +1478,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "gaussian splatting", + "vr", "deformation", - "vr" + "ar" ], "citations": 0, "semantic_url": "" @@ -1499,12 +1503,12 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "3d gaussian", - "understanding" + "human", + "understanding", + "ar" ], "citations": 0, "semantic_url": "" @@ -1525,15 +1529,15 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", + "slam", "gaussian splatting", - "ar", + "tracking", "3d reconstruction", - "fast", - "3d gaussian", "efficient", - "sparse view", - "tracking", - "slam" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -1557,8 +1561,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "3d gaussian" + "lighting" ], "citations": 0, "semantic_url": "" @@ -1583,12 +1588,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", "ar", - "fast", "real-time rendering", - "nerf", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -1613,9 +1618,9 @@ ], "github_url": "", "keywords": [ - "ar", + "nerf", "fast", - "nerf" + "ar" ], "citations": 0, "semantic_url": "" @@ -1636,15 +1641,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", - "face", - "localization", "3d gaussian", + "gaussian splatting", + "mapping", + "3d reconstruction", "efficient", - "mapping" + "ar", + "face", + "localization" ], "citations": 0, "semantic_url": "" @@ -1670,15 +1675,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + 
"recognition", "segmentation", - "ar", - "real-time rendering", - "face", - "3d gaussian", "understanding", + "ar", "semantic", - "recognition" + "face", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -1698,14 +1703,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", + "deformation", "4d", - "ar", - "geometry", - "motion", - "3d gaussian", - "deformation" + "ar" ], "citations": 0, "semantic_url": "" @@ -1729,11 +1734,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -1758,12 +1763,12 @@ ], "github_url": "", "keywords": [ + "head", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "head", - "semantic" + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -1791,12 +1796,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -1820,11 +1825,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "urban scene", - "face", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -1846,12 +1851,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", "nerf", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -1876,12 +1881,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "segmentation", "understanding", + "ar", "semantic" ], "citations": 0, @@ -1919,10 +1924,10 @@ ], "github_url": "", "keywords": [ - "4d", - "ar", "nerf", - "autonomous driving" + "autonomous driving", + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ 
-1947,10 +1952,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "segmentation", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -1976,12 +1981,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "real-time rendering", "face", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -2004,9 +2009,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "ray tracing", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -2032,10 +2038,10 @@ "github_url": "", "keywords": [ "segmentation", - "ar", "nerf", - "face", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -2061,9 +2067,11 @@ ], "github_url": "", "keywords": [ + "lighting", + "high-fidelity", "urban scene", + "relighting", "ar", - "high-fidelity", "autonomous driving" ], "citations": 0, @@ -2089,12 +2097,12 @@ "dynamic", "gaussian splatting", "segmentation", - "ar", "3d reconstruction", - "fast", - "autonomous driving", "understanding", - "semantic" + "ar", + "semantic", + "autonomous driving", + "fast" ], "citations": 0, "semantic_url": "" @@ -2119,10 +2127,10 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", + "ar", "face" ], "citations": 0, @@ -2151,12 +2159,12 @@ "github_url": "", "keywords": [ "dynamic", - "ar", - "fast", - "localization", "3d gaussian", + "outdoor", "robotics", - "outdoor" + "ar", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -2181,15 +2189,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", "segmentation", - "ar", - "3d gaussian", - "efficient", - "autonomous driving", - "robotics", "understanding", - "semantic" + "robotics", + "efficient", + "ar", + "semantic", + "autonomous driving" ], "citations": 0, 
"semantic_url": "" @@ -2211,11 +2220,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "animation", "gaussian splatting", - "ar", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -2245,10 +2254,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "lighting", "gaussian splatting", - "compact", "ar", - "geometry" + "compact" ], "citations": 0, "semantic_url": "" @@ -2271,10 +2281,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -2303,12 +2313,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "fast", - "3d gaussian", - "mapping" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -2334,9 +2344,9 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "4d", - "ar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -2364,18 +2374,18 @@ ], "github_url": "", "keywords": [ - "animation", + "geometry", + "3d gaussian", + "motion", + "head", "gaussian splatting", + "animation", + "high-fidelity", "human", - "ar", - "geometry", "avatar", - "motion", - "3d gaussian", "efficient", - "head", - "compact", - "high-fidelity" + "ar", + "compact" ], "citations": 0, "semantic_url": "" @@ -2398,9 +2408,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -2423,12 +2433,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -2449,12 +2459,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", "fast", - "3d gaussian", - "compact" + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ 
-2480,11 +2490,11 @@ ], "github_url": "https://github.com/WJakubowska/NeuralSurfacePriors", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -2509,13 +2519,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "human", - "ar", - "face", "avatar", - "3d gaussian", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -2537,9 +2547,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "head", "ar", - "head" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -2561,12 +2571,12 @@ ], "github_url": "https://github.com/JiaxiongQ/GLS", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "segmentation", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -2591,13 +2601,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", + "slam", + "gaussian splatting", "efficient", - "slam" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -2618,12 +2628,12 @@ ], "github_url": "https://github.com/ChenHoy/DROID-Splat", "keywords": [ - "gaussian splatting", - "ar", - "fast", "3d gaussian", + "slam", + "gaussian splatting", "tracking", - "slam" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -2645,9 +2655,9 @@ ], "github_url": "https://github.com/bbbbby-99/DGGS", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -2676,11 +2686,11 @@ "github_url": "", "keywords": [ "dynamic", - "human", - "animation", - "ar", "3d gaussian", - "efficient" + "animation", + "human", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -2706,11 +2716,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "3d 
gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -2735,11 +2745,11 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", + "3d gaussian", "motion", - "3d gaussian" + "dynamic", + "ar" ], "citations": 0, "semantic_url": "" @@ -2767,9 +2777,9 @@ "dynamic", "gaussian splatting", "4d", + "efficient", "ar", - "fast", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -2792,11 +2802,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "face", - "3d gaussian", - "efficient" + "face" ], "citations": 0, "semantic_url": "" @@ -2817,12 +2827,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "head", "3d gaussian", - "efficient" + "gaussian splatting", + "3d reconstruction", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -2847,13 +2857,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "autonomous driving", + "nerf", "ar", "real-time rendering", - "nerf", - "3d gaussian", - "neural rendering", - "autonomous driving" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -2877,12 +2887,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", "3d gaussian", + "gaussian splatting", + "high-fidelity", "efficient", - "high-fidelity" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -2906,13 +2916,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", - "face", "3d gaussian", - "efficient" + "gaussian splatting", + "nerf", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -2935,15 +2945,15 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "localization", "3d gaussian", - "autonomous driving", + "slam", "tracking", - "robotics", "mapping", - "slam" + "robotics", + "ar", + "autonomous driving", + "fast", + "localization" ], "citations": 0, "semantic_url": "" 
@@ -2967,15 +2977,15 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "high-fidelity", + "deformation", "3d reconstruction", + "ar", "real-time rendering", - "fast", - "motion", - "3d gaussian", - "deformation", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -3000,11 +3010,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "fast", - "3d gaussian", - "sparse-view" + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -3026,14 +3036,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "animation", + "high-fidelity", "human", - "ar", "avatar", - "motion", - "3d gaussian", - "body", - "high-fidelity" + "ar", + "body" ], "citations": 0, "semantic_url": "" @@ -3062,9 +3072,9 @@ "github_url": "", "keywords": [ "gaussian splatting", + "high-fidelity", "ar", "efficient", - "high-fidelity", "semantic" ], "citations": 0, @@ -3090,17 +3100,17 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "human", - "ar", "geometry", - "localization", - "motion", "3d gaussian", + "motion", + "dynamic", + "gaussian splatting", + "slam", "deformation", "mapping", - "slam" + "human", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -3124,12 +3134,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "motion", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "nerf", + "ar" ], "citations": 0, "semantic_url": "" @@ -3155,12 +3165,12 @@ ], "github_url": "", "keywords": [ - "human", + "motion", + "3d gaussian", "gaussian splatting", - "ar", + "human", "avatar", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -3182,12 +3192,12 @@ ], "github_url": "", "keywords": [ + "head", "dynamic", "gaussian splatting", - "ar", + "tracking", "avatar", - "head", - "tracking" + "ar" ], "citations": 0, "semantic_url": "" @@ -3209,15 +3219,15 @@ 
], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", + "high-fidelity", "3d reconstruction", - "fast", "nerf", - "geometry", + "ar", "face", - "3d gaussian", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -3244,10 +3254,10 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", "4d", "ar", - "motion", "autonomous driving" ], "citations": 0, @@ -3272,10 +3282,10 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", "4d", "ar", - "motion", "autonomous driving" ], "citations": 0, @@ -3303,14 +3313,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "face", - "localization", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "ar", + "face", + "localization" ], "citations": 0, "semantic_url": "" @@ -3336,13 +3346,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", - "face", "3d gaussian", - "efficient" + "gaussian splatting", + "nerf", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -3369,15 +3379,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "urban scene", + "understanding", "ar", "real-time rendering", - "fast", - "urban scene", - "3d gaussian", + "semantic", "autonomous driving", - "understanding", - "semantic" + "fast" ], "citations": 0, "semantic_url": "" @@ -3401,12 +3411,12 @@ ], "github_url": "https://github.com/insait-institute/N4DE", "keywords": [ + "geometry", "gaussian splatting", + "deformation", "4d", "ar", - "geometry", - "face", - "deformation" + "face" ], "citations": 0, "semantic_url": "" @@ -3433,13 +3443,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", + "efficient", + "ar", "face", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -3465,12 +3475,12 @@ "github_url": "", "keywords": [ 
"dynamic", + "3d gaussian", "gaussian splatting", "4d", "ar", "real-time rendering", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -3500,14 +3510,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", + "autonomous driving", "segmentation", - "ar", - "motion", - "3d gaussian", "efficient", - "neural rendering", - "autonomous driving" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -3531,9 +3541,9 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "4d", - "ar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -3565,10 +3575,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "fast", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -3590,11 +3600,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "high quality", "fast", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -3617,11 +3627,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "lighting", + "gaussian splatting", + "mapping", "efficient", - "mapping" + "ar" ], "citations": 0, "semantic_url": "" @@ -3645,9 +3656,9 @@ "gaussian splatting", "segmentation", "ar", + "semantic", "fast", - "localization", - "semantic" + "localization" ], "citations": 0, "semantic_url": "" @@ -3673,11 +3684,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "understanding" + "gaussian splatting", + "understanding", + "ar" ], "citations": 0, "semantic_url": "" @@ -3701,14 +3712,14 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", + "high-fidelity", "nerf", - "face", "avatar", - "3d gaussian", - "head", - "high-fidelity" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -3728,8 +3739,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", 
"ar", + "gaussian splatting", "localization" ], "citations": 0, @@ -3754,9 +3765,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "relightable", "gaussian splatting", + "lighting", + "relighting", "ar", - "3d gaussian" + "illumination" ], "citations": 0, "semantic_url": "" @@ -3779,11 +3794,11 @@ "github_url": "", "keywords": [ "dynamic", - "ar", - "nerf", - "face", "3d gaussian", - "efficient" + "nerf", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -3808,15 +3823,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "human", - "segmentation", - "ar", "geometry", "motion", + "dynamic", + "gaussian splatting", "deformation", - "efficient" + "segmentation", + "human", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -3840,12 +3855,12 @@ ], "github_url": "https://github.com/Public-BOTs/GaussianPretrain", "keywords": [ - "ar", - "fast", - "nerf", "3d gaussian", + "nerf", + "understanding", + "ar", "autonomous driving", - "understanding" + "fast" ], "citations": 0, "semantic_url": "" @@ -3867,11 +3882,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", "ar", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -3896,12 +3911,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", "real-time rendering", - "fast", - "3d gaussian", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -3921,11 +3936,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", + "efficient", "ar", - "geometry", - "fast", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -3947,17 +3962,17 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", + "slam", "gaussian splatting", + "tracking", + "mapping", "segmentation", "ar", + "semantic", "fast", - "mapping", - "localization", - "3d gaussian", - "tracking", - "outdoor", - "slam", - "semantic" + 
"localization" ], "citations": 0, "semantic_url": "" @@ -3980,11 +3995,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "deformation", "3d gaussian", + "gaussian splatting", + "deformation", + "ar", "semantic" ], "citations": 0, @@ -4007,11 +4022,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -4039,11 +4054,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", - "3d gaussian", - "efficient", - "high-fidelity" + "efficient" ], "citations": 0, "semantic_url": "" @@ -4070,12 +4085,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "face", + "3d gaussian", "motion", + "gaussian splatting", "deformation", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -4101,13 +4116,13 @@ ], "github_url": "", "keywords": [ - "human", + "sparse view", + "geometry", + "3d gaussian", "gaussian splatting", + "human", "ar", - "geometry", "real-time rendering", - "3d gaussian", - "sparse view", "sparse-view" ], "citations": 0, @@ -4136,15 +4151,15 @@ "github_url": "https://github.com/chengweialan/DeSiRe-GS", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "high-fidelity", "4d", + "efficient", "ar", "face", - "motion", - "3d gaussian", - "efficient", - "autonomous driving", - "high-fidelity" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -4169,10 +4184,10 @@ "github_url": "https://github.com/gmum/VeGaS", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "compression", "ar", - "3d gaussian" + "compression" ], "citations": 0, "semantic_url": "" @@ -4197,14 +4212,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "geometry", - "high quality", - "fast", "face", - "3d gaussian", - 
"efficient" + "fast", + "high quality" ], "citations": 0, "semantic_url": "" @@ -4227,12 +4242,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "slam", "gaussian splatting", + "tracking", "segmentation", "ar", - "3d gaussian", - "tracking", - "slam" + "shadow" ], "citations": 0, "semantic_url": "" @@ -4254,11 +4270,11 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", "motion", - "sparse-view", - "semantic" + "3d reconstruction", + "ar", + "semantic", + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -4284,14 +4300,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", + "slam", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", "nerf", - "motion", - "3d gaussian", - "mapping", - "slam" + "ar" ], "citations": 0, "semantic_url": "" @@ -4316,12 +4332,12 @@ ], "github_url": "https://github.com/chenkang455/USP-Gaussian", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -4345,10 +4361,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -4372,13 +4388,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", "ar", + "semantic", "real-time rendering", - "nerf", - "localization", - "3d gaussian", - "semantic" + "localization" ], "citations": 0, "semantic_url": "" @@ -4399,12 +4415,13 @@ ], "github_url": "https://github.com/J-X-Chen/GGAvatar/", "keywords": [ + "3d gaussian", + "lighting", "animation", "gaussian splatting", "human", - "ar", "avatar", - "3d gaussian", + "ar", "body" ], "citations": 0, @@ -4429,11 +4446,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "recognition", "survey", - "recognition" + "ar", + "illumination" 
], "citations": 0, "semantic_url": "" @@ -4455,10 +4473,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "ar", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -4481,12 +4499,12 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", + "few-shot", "4d", "ar", - "fast", - "few-shot", - "motion" + "fast" ], "citations": 0, "semantic_url": "" @@ -4510,12 +4528,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "lighting", "gaussian splatting", - "ar", + "high-fidelity", "avatar", - "3d gaussian", - "neural rendering", - "high-fidelity" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -4537,12 +4556,12 @@ ], "github_url": "", "keywords": [ + "outdoor", "gaussian splatting", - "compression", - "ar", "nerf", "efficient", - "outdoor" + "ar", + "compression" ], "citations": 0, "semantic_url": "" @@ -4568,14 +4587,14 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "3d reconstruction", "nerf", - "motion", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -4598,19 +4617,19 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", + "slam", + "high-fidelity", + "tracking", + "mapping", "ar", - "geometry", "real-time rendering", "face", - "localization", - "motion", - "3d gaussian", - "tracking", - "mapping", - "slam", - "high-fidelity" + "localization" ], "citations": 0, "semantic_url": "" @@ -4634,15 +4653,16 @@ ], "github_url": "https://github.com/WU-CVGL/MBA-SLAM", "keywords": [ - "gaussian splatting", - "ar", - "nerf", - "localization", "motion", "3d gaussian", - "efficient", + "slam", + "lighting", + "gaussian splatting", "mapping", - "slam" + "nerf", + "efficient", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -4664,10 +4684,10 @@ ], "github_url": "", "keywords": 
[ - "gaussian splatting", + "3d gaussian", "ar", - "real-time rendering", - "3d gaussian" + "gaussian splatting", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -4688,10 +4708,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "segmentation", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -4715,13 +4735,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "efficient", "ar", "real-time rendering", "face", - "motion", - "3d gaussian", - "efficient", "compact" ], "citations": 0, @@ -4745,10 +4765,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", + "relighting", "ar", + "illumination", "face", - "3d gaussian", "shape reconstruction" ], "citations": 0, @@ -4773,11 +4796,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", - "3d gaussian", - "compact" + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -4801,13 +4824,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", + "high-fidelity", "ar", - "geometry", - "face", - "3d gaussian", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -4831,10 +4854,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "sparse view", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -4859,13 +4882,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", - "face", "3d gaussian", - "head" + "head", + "gaussian splatting", + "nerf", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -4889,9 +4912,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "nerf", "3d gaussian", - "nerf" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -4914,11 +4937,11 @@ ], "github_url": "", "keywords": [ - "gaussian 
splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", "efficient", + "ar", "compact" ], "citations": 0, @@ -4942,11 +4965,12 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "face", + "3d gaussian", "head", - "3d gaussian" + "ar", + "face", + "reflection" ], "citations": 0, "semantic_url": "" @@ -4966,10 +4990,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "3d gaussian", "high quality", - "3d gaussian" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -4994,12 +5018,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", + "sparse view", "geometry", "3d gaussian", - "sparse view" + "gaussian splatting", + "3d reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -5021,10 +5045,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "lightweight", + "3d gaussian", "ar", - "3d gaussian" + "lightweight", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -5046,12 +5070,12 @@ ], "github_url": "https://github.com/520xyxyzq/3DGS-CD", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -5075,15 +5099,15 @@ ], "github_url": "https://github.com/prstrive/SCGaussian", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "3d gaussian", + "large scene", + "gaussian splatting", "nerf", "few-shot", - "face", - "large scene", - "3d gaussian", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -5107,10 +5131,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", - "tracking" + "gaussian splatting", + "tracking", + "ar" ], "citations": 0, "semantic_url": "" @@ -5135,11 +5159,11 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "human", "efficient", + 
"ar", "body" ], "citations": 0, @@ -5161,14 +5185,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "3d gaussian", + "slam", + "gaussian splatting", + "high-fidelity", "mapping", + "3d reconstruction", "acceleration", - "slam", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -5196,12 +5220,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", - "ar", "3d gaussian", + "gaussian splatting", "mapping", + "segmentation", "understanding", + "ar", "semantic" ], "citations": 0, @@ -5227,8 +5251,8 @@ ], "github_url": "", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -5251,12 +5275,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "3d gaussian", + "gaussian splatting", "nerf", "few-shot", - "3d gaussian", + "ar", "semantic" ], "citations": 0, @@ -5278,15 +5302,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", + "nerf", + "efficient", "ar", "real-time rendering", - "nerf", - "fast", "face", - "3d gaussian", - "efficient", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -5308,11 +5332,12 @@ ], "github_url": "", "keywords": [ + "geometry", "dynamic", + "lighting", "gaussian splatting", "4d", - "ar", - "geometry" + "ar" ], "citations": 0, "semantic_url": "" @@ -5335,14 +5360,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", - "ar", "geometry", - "face", "3d gaussian", + "gaussian splatting", + "high-fidelity", "efficient", - "high-fidelity" + "ar", + "face", + "compression" ], "citations": 0, "semantic_url": "" @@ -5370,8 +5395,8 @@ ], "github_url": "", "keywords": [ - "dynamic", "understanding", + "dynamic", "ar" ], "citations": 0, @@ -5395,11 +5420,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "real-time rendering", - "3d 
gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -5423,14 +5448,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", - "geometry", "nerf", "few-shot", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -5457,13 +5482,17 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", + "relightable", "human", - "ar", - "real-time rendering", "avatar", - "3d gaussian", "efficient", - "head" + "light transport", + "illumination", + "real-time rendering", + "ar", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -5488,9 +5517,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -5513,12 +5542,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", - "high-fidelity" + "lighting", + "gaussian splatting", + "high-fidelity", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -5542,10 +5572,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "nerf", "3d gaussian", - "nerf" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -5567,9 +5597,9 @@ ], "github_url": "", "keywords": [ - "ar", + "nerf", "slam", - "nerf" + "ar" ], "citations": 0, "semantic_url": "" @@ -5590,11 +5620,11 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "motion", - "3d gaussian" + "3d gaussian", + "gaussian splatting", + "human", + "ar" ], "citations": 0, "semantic_url": "" @@ -5615,11 +5645,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", "fast", - "3d gaussian" + "compression" ], "citations": 0, "semantic_url": "" @@ -5641,11 +5671,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", 
"efficient", + "ar", "sparse-view" ], "citations": 0, @@ -5669,10 +5699,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "geometry", "ar", - "3d reconstruction", - "geometry" + "gaussian splatting", + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -5697,13 +5727,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", - "fast", "3d gaussian", - "lightweight" + "gaussian splatting", + "3d reconstruction", + "ar", + "lightweight", + "fast" ], "citations": 0, "semantic_url": "" @@ -5731,9 +5761,9 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "ar", - "motion", - "3d gaussian" + "motion" ], "citations": 0, "semantic_url": "" @@ -5759,10 +5789,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "ar", - "efficient", + "high-fidelity", "mapping", - "high-fidelity" + "ar", + "efficient" ], "citations": 0, "semantic_url": "" @@ -5783,12 +5813,12 @@ ], "github_url": "https://github.com/Pixie8888/MVSDet", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", "head", - "efficient" + "gaussian splatting", + "nerf", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -5811,12 +5841,12 @@ "github_url": "", "keywords": [ "dynamic", + "head", "gaussian splatting", - "4d", - "ar", + "high-fidelity", "deformation", - "head", - "high-fidelity" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -5844,8 +5874,8 @@ "dynamic", "gaussian splatting", "human", - "ar", - "avatar" + "avatar", + "ar" ], "citations": 0, "semantic_url": "" @@ -5873,10 +5903,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", "efficient", + "ar", "semantic" ], "citations": 0, @@ -5899,14 +5929,14 @@ ], "github_url": "https://github.com/esw0116/ODGS", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "real-time rendering", "nerf", - "fast", - "3d gaussian", - "head" + 
"ar", + "real-time rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -5929,13 +5959,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "lighting", + "high-fidelity", "ar", - "geometry", + "illumination", "real-time rendering", - "face", - "3d gaussian", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -5964,18 +5996,19 @@ ], "github_url": "", "keywords": [ + "geometry", "dynamic", "gaussian splatting", - "ar", + "lighting", + "high-fidelity", + "survey", "3d reconstruction", - "geometry", "nerf", - "autonomous driving", - "survey", "robotics", - "compact", - "high-fidelity", - "semantic" + "ar", + "semantic", + "autonomous driving", + "compact" ], "citations": 0, "semantic_url": "" @@ -6004,10 +6037,10 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", "geometry", "3d gaussian", + "3d reconstruction", + "ar", "semantic" ], "citations": 0, @@ -6029,11 +6062,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", - "3d gaussian", - "efficient", - "high-fidelity" + "efficient" ], "citations": 0, "semantic_url": "" @@ -6054,10 +6087,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "fast", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -6080,11 +6113,11 @@ ], "github_url": "https://github.com/WeihangLiu2024/Content_Aware_NeRF", "keywords": [ - "gaussian splatting", - "ar", + "nerf", "3d gaussian", - "nerf" - ], + "ar", + "gaussian splatting" + ], "citations": 0, "semantic_url": "" }, @@ -6106,9 +6139,9 @@ ], "github_url": "", "keywords": [ + "ar", "gaussian splatting", - "face", - "ar" + "face" ], "citations": 0, "semantic_url": "" @@ -6135,11 +6168,11 @@ ], "github_url": "https://github.com/Barrybarry-Smith/PixelGaussian", "keywords": [ - "dynamic", - "ar", "geometry", "3d gaussian", - "efficient" + "dynamic", + "efficient", + "ar" ], "citations": 
0, "semantic_url": "" @@ -6168,12 +6201,12 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", + "avatar", "ar", - "geometry", - "high quality", "fast", - "avatar" + "high quality" ], "citations": 0, "semantic_url": "" @@ -6199,12 +6232,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", "head", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -6228,11 +6261,11 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "motion", "3d gaussian", - "tracking" + "motion", + "gaussian splatting", + "tracking", + "ar" ], "citations": 0, "semantic_url": "" @@ -6254,11 +6287,11 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", - "sparse view" + "ar" ], "citations": 0, "semantic_url": "" @@ -6281,13 +6314,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", - "fast", + "vr", "nerf", - "3d gaussian", - "vr" + "human", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -6309,12 +6342,12 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "nerf", - "motion", - "3d gaussian", + "ar", "semantic" ], "citations": 0, @@ -6338,13 +6371,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "motion", "3d gaussian", + "gaussian splatting", "mapping", - "semantic" + "ar", + "semantic", + "localization" ], "citations": 0, "semantic_url": "" @@ -6369,15 +6402,16 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", + "lighting", "gaussian splatting", - "ar", + "deformation", "3d reconstruction", - "geometry", - "face", - "motion", - "3d gaussian", - "deformation" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -6406,10 +6440,10 @@ ], "github_url": "", 
"keywords": [ - "ar", - "sparse-view", + "nerf", "fast", - "nerf" + "ar", + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -6436,15 +6470,16 @@ ], "github_url": "https://github.com/MasterHow/E-3DGS", "keywords": [ + "head", + "3d gaussian", + "motion", "gaussian splatting", - "ar", "3d reconstruction", - "fast", "nerf", + "ar", + "illumination", "face", - "motion", - "3d gaussian", - "head" + "fast" ], "citations": 0, "semantic_url": "" @@ -6468,13 +6503,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "head", - "efficient", + "path tracing", + "medical", + "gaussian splatting", "vr", "understanding", - "medical" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -6497,9 +6533,9 @@ ], "github_url": "", "keywords": [ - "deformation", + "3d gaussian", "ar", - "3d gaussian" + "deformation" ], "citations": 0, "semantic_url": "" @@ -6521,11 +6557,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", - "3d gaussian", - "efficient", - "high-fidelity" + "efficient" ], "citations": 0, "semantic_url": "" @@ -6551,15 +6587,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", + "efficient", "ar", - "geometry", "real-time rendering", - "motion", - "efficient", - "compact", - "semantic" + "semantic", + "compact" ], "citations": 0, "semantic_url": "" @@ -6585,9 +6621,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -6612,13 +6648,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "tracking", "4d", "ar", - "fast", - "motion", - "3d gaussian", - "tracking" + "fast" ], "citations": 0, "semantic_url": "" @@ -6642,11 +6678,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "motion", - "3d gaussian" + "ar" ], "citations": 0, 
"semantic_url": "" @@ -6669,13 +6705,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "3d gaussian", + "outdoor", + "gaussian splatting", "mapping", + "nerf", "acceleration", - "outdoor" + "ar" ], "citations": 0, "semantic_url": "" @@ -6698,12 +6734,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", - "ar", "geometry", "3d gaussian", - "efficient" + "lighting", + "gaussian splatting", + "segmentation", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -6724,12 +6761,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", "ar", - "geometry", "face", - "3d gaussian", "neural rendering" ], "citations": 0, @@ -6759,15 +6796,15 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "4d", - "ar", - "nerf", "motion", + "head", + "gaussian splatting", + "high-fidelity", "deformation", + "nerf", + "4d", "efficient", - "head", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -6792,9 +6829,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -6818,9 +6855,9 @@ ], "github_url": "", "keywords": [ - "body", + "robotics", "ar", - "robotics" + "body" ], "citations": 0, "semantic_url": "" @@ -6851,13 +6888,13 @@ "keywords": [ "dynamic", "gaussian splatting", + "deformation", "4d", - "compression", + "efficient", + "lightweight", "ar", "face", - "deformation", - "efficient", - "lightweight" + "compression" ], "citations": 0, "semantic_url": "" @@ -6883,11 +6920,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "4d", - "ar", "nerf", - "3d gaussian" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -6918,12 +6955,12 @@ "github_url": "", "keywords": [ "dynamic", + "nerf", + "acceleration", "4d", "ar", - "nerf", "face", - "autonomous driving", - "acceleration" + 
"autonomous driving" ], "citations": 0, "semantic_url": "" @@ -6949,10 +6986,10 @@ ], "github_url": "", "keywords": [ - "ar", "large scene", "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -6979,14 +7016,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "high-fidelity", "segmentation", - "ar", - "geometry", "nerf", - "face", - "3d gaussian", - "high-fidelity" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -7010,8 +7047,8 @@ ], "github_url": "https://github.com/Bistu3DV/hybridBA", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -7035,10 +7072,10 @@ ], "github_url": "https://github.com/jwubz123/UNIG", "keywords": [ + "3d gaussian", "ar", "3d reconstruction", - "high-fidelity", - "3d gaussian" + "high-fidelity" ], "citations": 0, "semantic_url": "" @@ -7064,10 +7101,10 @@ ], "github_url": "", "keywords": [ - "ar", "large scene", "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -7096,8 +7133,8 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", + "ar", "semantic" ], "citations": 0, @@ -7120,14 +7157,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", - "nerf", "3d gaussian", + "gaussian splatting", "survey", + "nerf", + "understanding", "robotics", - "understanding" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -7149,12 +7186,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", "ar", "fast", - "localization", - "motion", - "3d gaussian" + "localization" ], "citations": 0, "semantic_url": "" @@ -7177,13 +7214,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "few-shot", - "localization", "motion", "3d gaussian", - "mapping" + "gaussian splatting", + "mapping", + "few-shot", + "ar", + "localization" ], "citations": 0, "semantic_url": 
"" @@ -7209,10 +7246,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "lighting", "gaussian splatting", + "relighting", + "efficient", + "shadow", + "illumination", "ar", - "geometry", - "efficient" + "global illumination" ], "citations": 0, "semantic_url": "" @@ -7236,13 +7278,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", + "gaussian splatting", "efficient", + "ar", "sparse-view", + "fast", "compact" ], "citations": 0, @@ -7267,15 +7309,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", - "localization", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam", - "compact" + "ar", + "face", + "compact", + "localization" ], "citations": 0, "semantic_url": "" @@ -7298,12 +7340,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "large scene", "motion", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -7325,10 +7367,10 @@ ], "github_url": "https://github.com/raja-kumar/depth-aware-3DGS", "keywords": [ - "gaussian splatting", - "ar", + "few-shot", "3d gaussian", - "few-shot" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -7353,14 +7395,14 @@ "github_url": "", "keywords": [ "dynamic", - "human", - "ar", "3d reconstruction", "nerf", - "face", - "autonomous driving", + "human", + "understanding", "robotics", - "understanding" + "ar", + "face", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -7385,12 +7427,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "understanding", "4d", "ar", "face", - "3d gaussian", - "understanding", "semantic" ], "citations": 0, @@ -7415,9 +7457,9 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", "4d", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -7490,13 +7532,13 @@ ], "github_url": "", "keywords": 
[ - "gaussian splatting", - "ar", "geometry", - "efficient", - "vr", + "gaussian splatting", "tracking", - "understanding" + "vr", + "understanding", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -7521,15 +7563,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", - "ar", + "deformation", "3d reconstruction", - "geometry", - "fast", - "motion", - "3d gaussian", - "deformation" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -7552,17 +7594,17 @@ ], "github_url": "", "keywords": [ - "animation", - "gaussian splatting", - "ar", "geometry", - "nerf", - "face", "3d gaussian", - "neural rendering", - "vr", "outdoor", - "high-fidelity" + "gaussian splatting", + "animation", + "high-fidelity", + "vr", + "nerf", + "ar", + "face", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -7587,11 +7629,11 @@ ], "github_url": "https://github.com/XuanHuang0/GuassianHand", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "avatar", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -7612,12 +7654,12 @@ ], "github_url": "https://github.com/Schmiddo/noposegs", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -7649,14 +7691,14 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", + "human", "efficient", - "sparse-view" + "ar", + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -7682,10 +7724,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "face", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -7717,11 +7759,15 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "lighting", "gaussian 
splatting", + "high-fidelity", + "relighting", "ar", + "shadow", "face", - "3d gaussian", - "high-fidelity" + "reflection" ], "citations": 0, "semantic_url": "" @@ -7750,12 +7796,17 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "nerf", "3d gaussian", + "relightable", + "lighting", + "high-fidelity", + "nerf", + "relighting", + "ar", + "illumination", "autonomous driving", - "high-fidelity" + "global illumination" ], "citations": 0, "semantic_url": "" @@ -7782,10 +7833,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "human", - "ar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -7813,11 +7864,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -7839,14 +7890,14 @@ "github_url": "https://github.com/wu-cvgl/IncEventGS", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "nerf", - "motion", "3d gaussian", + "motion", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "nerf", + "ar" ], "citations": 0, "semantic_url": "" @@ -7870,12 +7921,12 @@ ], "github_url": "https://github.com/YihangChen-ee/FCGS", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", + "high-fidelity", "ar", "fast", - "3d gaussian", - "high-fidelity" + "compression" ], "citations": 0, "semantic_url": "" @@ -7896,11 +7947,11 @@ ], "github_url": "https://github.com/xg-chu/GAGAvatar", "keywords": [ - "ar", - "avatar", "head", "3d gaussian", - "high-fidelity" + "high-fidelity", + "avatar", + "ar" ], "citations": 0, "semantic_url": "" @@ -7920,11 +7971,11 @@ ], "github_url": "", "keywords": [ + "mapping", "ar", - "fast", - "face", "lightweight", - "mapping" + "face", + "fast" ], "citations": 0, "semantic_url": "" @@ -7953,11 +8004,11 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", + "3d gaussian", "motion", + "gaussian 
splatting", "deformation", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -7985,12 +8036,12 @@ "keywords": [ "gaussian splatting", "segmentation", - "ar", "3d reconstruction", - "autonomous driving", - "robotics", "understanding", - "semantic" + "robotics", + "ar", + "semantic", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -8019,8 +8070,8 @@ "github_url": "", "keywords": [ "understanding", - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -8042,14 +8093,14 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", - "4d", "animation", - "ar", + "deformation", "nerf", - "face", - "motion", - "deformation" + "4d", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -8074,10 +8125,10 @@ ], "github_url": "", "keywords": [ + "mapping", "gaussian splatting", - "ar", "3d reconstruction", - "mapping" + "ar" ], "citations": 0, "semantic_url": "" @@ -8096,12 +8147,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "3d gaussian", + "gaussian splatting", + "lighting", + "high-fidelity", "survey", - "high-fidelity" + "nerf", + "ar" ], "citations": 0, "semantic_url": "" @@ -8126,10 +8178,10 @@ ], "github_url": "https://github.com/zju-bmi-lab/SpikingGS", "keywords": [ - "gaussian splatting", - "face", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -8154,10 +8206,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "sparse-view", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -8189,11 +8241,16 @@ ], "github_url": "", "keywords": [ + "geometry", + "relightable", "gaussian splatting", + "lighting", + "relighting", "ar", - "geometry", - "fast", - "sparse-view" + "shadow", + "illumination", + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -8215,10 +8272,10 @@ ], "github_url": "", "keywords": [ 
- "gaussian splatting", + "3d gaussian", "ar", - "localization", - "3d gaussian" + "gaussian splatting", + "localization" ], "citations": 0, "semantic_url": "" @@ -8243,8 +8300,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "semantic" ], "citations": 0, @@ -8266,13 +8323,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "face", - "3d gaussian", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -8297,11 +8354,11 @@ ], "github_url": "", "keywords": [ + "head", "gaussian splatting", - "ar", "nerf", - "head", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -8323,10 +8380,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -8351,13 +8408,13 @@ "github_url": "", "keywords": [ "dynamic", + "large scene", "gaussian splatting", - "ar", + "high-fidelity", "nerf", + "ar", "face", - "large scene", - "autonomous driving", - "high-fidelity" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -8383,10 +8440,10 @@ ], "github_url": "https://github.com/ARCLab-MIT/space-nvs", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "gaussian splatting", + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -8412,9 +8469,9 @@ "github_url": "", "keywords": [ "human", - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -8439,12 +8496,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", + "ray tracing", "ar", - "high quality", "real-time rendering", - "nerf", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -8469,13 +8527,13 @@ ], "github_url": "", "keywords": [ + "3d 
gaussian", "gaussian splatting", - "ar", "few-shot", - "face", - "3d gaussian", "robotics", - "semantic" + "ar", + "semantic", + "face" ], "citations": 0, "semantic_url": "" @@ -8501,10 +8559,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "ar", "neural rendering" ], "citations": 0, @@ -8529,11 +8587,11 @@ ], "github_url": "", "keywords": [ + "sparse view", "gaussian splatting", "ar", "face", - "autonomous driving", - "sparse view" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -8557,10 +8615,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -8587,9 +8645,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian" + "illumination", + "reflection" ], "citations": 0, "semantic_url": "" @@ -8610,13 +8670,19 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "geometry", + "path tracing", "3d gaussian", + "geometry", + "gaussian splatting", + "lighting", + "high-fidelity", + "relighting", "efficient", + "shadow", + "illumination", "lightweight", - "high-fidelity" + "ar", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -8638,11 +8704,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "real-time rendering", "face", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -8663,10 +8729,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", + "ar", "body" ], "citations": 0, @@ -8694,10 +8760,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "nerf" + "gaussian splatting", + "nerf", + "ray tracing", + "ar" ], "citations": 0, "semantic_url": "" @@ -8718,12 +8785,12 @@ ], "github_url": "", "keywords": [ + "3d 
gaussian", "gaussian splatting", - "ar", "nerf", - "face", - "3d gaussian", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -8746,11 +8813,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", + "ar", "face", - "3d gaussian" + "reflection" ], "citations": 0, "semantic_url": "" @@ -8775,13 +8843,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", + "ar", + "semantic", "high quality", - "3d gaussian", - "compact", - "semantic" + "compact" ], "citations": 0, "semantic_url": "" @@ -8805,10 +8873,11 @@ ], "github_url": "", "keywords": [ - "human", "gaussian splatting", - "compression", - "ar" + "human", + "ar", + "reflection", + "compression" ], "citations": 0, "semantic_url": "" @@ -8831,11 +8900,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "ar", - "high quality", - "motion", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -8858,12 +8927,12 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "human", + "efficient", "ar", - "face", - "3d gaussian", - "efficient" + "face" ], "citations": 0, "semantic_url": "" @@ -8884,10 +8953,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "gaussian splatting", "nerf", + "ar", "face" ], "citations": 0, @@ -8913,11 +8982,11 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", + "sparse view", "3d gaussian", + "3d reconstruction", "efficient", - "sparse view" + "ar" ], "citations": 0, "semantic_url": "" @@ -8942,14 +9011,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "3d gaussian", - "robotics", - "mapping", "slam", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "mapping", + "robotics", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -8975,13 
+9044,13 @@ ], "github_url": "https://github.com/windrise/3DGR-CAR", "keywords": [ - "ar", - "3d reconstruction", - "fast", + "sparse view", "3d gaussian", + "3d reconstruction", "efficient", - "sparse view", - "sparse-view" + "ar", + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -9009,14 +9078,14 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", - "3d reconstruction", - "localization", "motion", "3d gaussian", - "mapping" + "gaussian splatting", + "mapping", + "3d reconstruction", + "human", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -9040,12 +9109,12 @@ ], "github_url": "https://github.com/QiZS-BIT/GSPR", "keywords": [ + "3d gaussian", "gaussian splatting", + "recognition", "ar", - "localization", - "3d gaussian", "autonomous driving", - "recognition" + "localization" ], "citations": 0, "semantic_url": "" @@ -9068,17 +9137,17 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "motion", + "high-fidelity", + "deformation", + "3d reconstruction", "human", "4d", "ar", - "3d reconstruction", - "geometry", "face", - "motion", - "3d gaussian", - "deformation", - "body", - "high-fidelity" + "body" ], "citations": 0, "semantic_url": "" @@ -9102,11 +9171,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "robotics", "efficient", - "robotics" + "ar" ], "citations": 0, "semantic_url": "" @@ -9131,15 +9200,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "high quality", - "localization", "3d gaussian", - "lightweight", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "ar", + "lightweight", + "high quality", + "localization" ], "citations": 0, "semantic_url": "" @@ -9163,12 +9232,16 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "relightable", "gaussian splatting", + "lighting", + "relighting", "ar", - "geometry", - "fast", + "shadow", "face", - "3d 
gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -9192,12 +9265,12 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "nerf", "large scene", + "nerf", + "efficient", + "ar", "neural rendering", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -9224,13 +9297,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "localization", "motion", - "efficient", + "gaussian splatting", "tracking", - "mapping" + "mapping", + "efficient", + "ar", + "illumination", + "localization" ], "citations": 0, "semantic_url": "" @@ -9257,10 +9331,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -9284,10 +9358,10 @@ "keywords": [ "dynamic", "gaussian splatting", + "deformation", "human", "ar", "face", - "deformation", "sparse-view" ], "citations": 0, @@ -9311,10 +9385,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "segmentation", - "ar", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -9338,11 +9412,11 @@ ], "github_url": "", "keywords": [ + "motion", "gaussian splatting", + "mapping", "ar", - "fast", - "motion", - "mapping" + "fast" ], "citations": 0, "semantic_url": "" @@ -9372,11 +9446,11 @@ ], "github_url": "", "keywords": [ + "mapping", "ar", + "semantic", "fast", - "localization", - "mapping", - "semantic" + "localization" ], "citations": 0, "semantic_url": "" @@ -9405,10 +9479,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -9435,12 +9509,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", "3d gaussian", + "gaussian splatting", + "high-fidelity", "robotics", - "high-fidelity" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -9462,11 +9536,11 @@ 
], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -9491,12 +9565,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "4d", "animation", + "gaussian splatting", "human", - "ar", + "4d", "efficient", + "ar", "body" ], "citations": 0, @@ -9523,14 +9597,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "human", + "avatar", + "efficient", "ar", "real-time rendering", - "face", - "avatar", - "3d gaussian", - "efficient" + "face" ], "citations": 0, "semantic_url": "" @@ -9556,16 +9630,16 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "segmentation", - "ar", - "localization", "3d gaussian", - "efficient", - "understanding", "slam", + "gaussian splatting", "high-fidelity", - "semantic" + "segmentation", + "understanding", + "efficient", + "ar", + "semantic", + "localization" ], "citations": 0, "semantic_url": "" @@ -9589,8 +9663,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -9619,10 +9693,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "ar", - "fast", + "robotics", "efficient", - "robotics" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -9648,9 +9722,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -9676,16 +9750,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", - "localization", "3d gaussian", + "outdoor", + "gaussian splatting", + "nerf", + "understanding", + "ar", "lightweight", "compact", - "outdoor", - "understanding" + "localization" ], "citations": 0, "semantic_url": "" @@ -9707,13 +9781,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d 
reconstruction", "motion", "3d gaussian", + "gaussian splatting", + "3d reconstruction", + "robotics", "efficient", - "robotics" + "ar" ], "citations": 0, "semantic_url": "" @@ -9739,12 +9813,12 @@ "github_url": "", "keywords": [ "dynamic", - "human", - "ar", "3d reconstruction", - "neural rendering", + "human", + "understanding", "efficient", - "understanding" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -9768,14 +9842,14 @@ ], "github_url": "", "keywords": [ + "large scene", + "outdoor", "gaussian splatting", + "vr", "segmentation", "ar", - "real-time rendering", - "large scene", - "vr", - "outdoor", - "semantic" + "semantic", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -9798,10 +9872,10 @@ ], "github_url": "", "keywords": [ - "ar", - "real-time rendering", "nerf", + "ar", "lightweight", + "real-time rendering", "compact" ], "citations": 0, @@ -9826,9 +9900,12 @@ "github_url": "https://github.com/520jz/SpikeGS", "keywords": [ "dynamic", + "3d gaussian", + "lighting", "ar", + "illumination", "real-time rendering", - "3d gaussian" + "ray marching" ], "citations": 0, "semantic_url": "" @@ -9852,15 +9929,15 @@ ], "github_url": "", "keywords": [ - "animation", - "gaussian splatting", - "ar", "geometry", - "avatar", - "deformation", "3d gaussian", "head", - "high-fidelity" + "gaussian splatting", + "animation", + "high-fidelity", + "deformation", + "avatar", + "ar" ], "citations": 0, "semantic_url": "" @@ -9885,11 +9962,11 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", + "human", "avatar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -9911,14 +9988,14 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", "nerf", "avatar", - "3d gaussian", + "efficient rendering", "efficient", - "head", - "efficient rendering" + "ar" ], "citations": 0, "semantic_url": "" @@ -9942,13 +10019,13 @@ ], "github_url": "", "keywords": [ - 
"gaussian splatting", - "ar", "geometry", - "real-time rendering", + "3d gaussian", + "gaussian splatting", "nerf", "few-shot", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -9974,11 +10051,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "localization", - "3d gaussian", - "efficient" + "efficient", + "localization" ], "citations": 0, "semantic_url": "" @@ -10006,10 +10083,10 @@ "github_url": "", "keywords": [ "dynamic", - "ar", - "face", "motion", "high-fidelity", + "ar", + "face", "compact" ], "citations": 0, @@ -10036,9 +10113,11 @@ "github_url": "", "keywords": [ "dynamic", - "face", + "3d gaussian", + "lighting", + "relighting", "ar", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -10060,14 +10139,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "fast", - "motion", - "3d gaussian", + "ar", "neural rendering", - "mapping" + "fast" ], "citations": 0, "semantic_url": "" @@ -10091,10 +10170,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "real-time rendering", - "3d gaussian" + "gaussian splatting", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -10121,14 +10200,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "slam", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "3d gaussian", - "efficient", "robotics", - "mapping", - "slam" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -10153,12 +10232,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -10182,13 +10261,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", "gaussian splatting", - "ar", "3d reconstruction", - "face", - "3d 
gaussian", "robotics", - "outdoor" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -10210,11 +10289,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -10236,13 +10315,13 @@ ], "github_url": "https://github.com/kunalchelani/EdgeGaussians", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", + "gaussian splatting", + "mapping", "efficient", - "mapping" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -10268,12 +10347,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "large scene", "3d gaussian", - "high-fidelity" + "lighting", + "gaussian splatting", + "high-fidelity", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -10299,9 +10379,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -10323,10 +10403,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -10350,13 +10430,15 @@ ], "github_url": "", "keywords": [ + "path tracing", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "face", - "3d gaussian", + "ray tracing", + "acceleration", "efficient", - "acceleration" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -10380,16 +10462,17 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", - "ar", "3d gaussian", + "slam", + "lighting", + "gaussian splatting", "tracking", "mapping", + "segmentation", "understanding", - "slam", - "compact", - "semantic" + "ar", + "semantic", + "compact" ], "citations": 0, "semantic_url": "" @@ -10411,10 +10494,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d 
gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -10439,10 +10522,10 @@ ], "github_url": "https://github.com/florinshen/Vista3D", "keywords": [ - "gaussian splatting", - "face", + "geometry", "ar", - "geometry" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -10467,20 +10550,20 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "animation", + "head", + "high-fidelity", + "deformation", "human", - "ar", - "geometry", - "real-time rendering", - "face", "avatar", - "motion", - "3d gaussian", - "deformation", "efficient", - "head", - "high-fidelity" + "ar", + "real-time rendering", + "face" ], "citations": 0, "semantic_url": "" @@ -10505,9 +10588,9 @@ "github_url": "https://github.com/rqhuang88/SRIF", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian", "semantic" ], "citations": 0, @@ -10530,13 +10613,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "segmentation", - "ar", "few-shot", - "3d gaussian", - "robotics" + "robotics", + "ar", + "compression" ], "citations": 0, "semantic_url": "" @@ -10566,12 +10649,12 @@ ], "github_url": "", "keywords": [ + "motion", "gaussian splatting", - "4d", "segmentation", - "ar", "nerf", - "motion", + "4d", + "ar", "autonomous driving" ], "citations": 0, @@ -10598,10 +10681,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -10627,11 +10710,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "4d", "ar", - "real-time rendering", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -10654,17 +10737,17 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", "large scene", - "localization", "3d gaussian", - "efficient", + 
"slam", + "gaussian splatting", "tracking", + "high-fidelity", "mapping", - "slam", - "high-fidelity" + "efficient", + "ar", + "real-time rendering", + "localization" ], "citations": 0, "semantic_url": "" @@ -10687,15 +10770,16 @@ ], "github_url": "", "keywords": [ - "ar", + "high-fidelity", "3d reconstruction", - "fast", "nerf", - "localization", - "neural rendering", "efficient", "lightweight", - "high-fidelity" + "illumination", + "ar", + "neural rendering", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -10716,11 +10800,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", + "relighting", "ar", "real-time rendering", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -10743,10 +10829,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -10773,9 +10859,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "robotics", "ar", - "robotics" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -10797,10 +10883,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", + "segmentation", "ar", - "segmentation" + "gaussian splatting", + "compression" ], "citations": 0, "semantic_url": "" @@ -10823,12 +10909,12 @@ "github_url": "https://github.com/sntubix/denser", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -10853,11 +10939,11 @@ ], "github_url": "", "keywords": [ - "human", "gaussian splatting", + "mapping", + "human", "ar", - "fast", - "mapping" + "fast" ], "citations": 0, "semantic_url": "" @@ -10882,12 +10968,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", - "ar", "3d gaussian", + "gaussian splatting", "efficient", - 
"compact" + "ar", + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -10909,16 +10995,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "head", - "efficient", - "vr", "slam", - "high-fidelity" - ], - "citations": 0, - "semantic_url": "" + "gaussian splatting", + "high-fidelity", + "vr", + "efficient", + "ar" + ], + "citations": 0, + "semantic_url": "" }, { "title": "A Diffusion Approach to Radiance Field Relighting using Multi-Illumination Synthesis", @@ -10940,8 +11026,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "relightable", + "lighting", + "relighting", "ar", - "3d gaussian" + "illumination" ], "citations": 0, "semantic_url": "" @@ -10963,13 +11053,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", + "motion", "gaussian splatting", "ar", "real-time rendering", - "fast", - "motion", - "3d gaussian", - "head" + "fast" ], "citations": 0, "semantic_url": "" @@ -10990,11 +11080,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "sparse view", "motion", "3d gaussian", - "sparse view" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -11020,11 +11110,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", - "ar", + "vr", "3d reconstruction", - "3d gaussian", - "vr" + "ar", + "illumination" ], "citations": 0, "semantic_url": "" @@ -11051,16 +11143,16 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "compression", - "ar", - "motion", "head", - "efficient", - "vr", + "motion", + "gaussian splatting", "tracking", - "high-fidelity" + "high-fidelity", + "vr", + "human", + "efficient", + "ar", + "compression" ], "citations": 0, "semantic_url": "" @@ -11084,11 +11176,11 @@ ], "github_url": "https://github.com/florinshen/FlashSplat", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", "ar", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -11111,10 
+11203,10 @@ ], "github_url": "https://github.com/mzzcdf/Thermal3DGS", "keywords": [ - "gaussian splatting", - "ar", + "3d gaussian", "outdoor", - "3d gaussian" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -11136,9 +11228,9 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -11160,11 +11252,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", - "geometry", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -11190,12 +11282,12 @@ ], "github_url": "https://github.com/yanghb22-fdu/Hi3D-Official", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "3d reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -11223,14 +11315,16 @@ ], "github_url": "", "keywords": [ + "head", + "relightable", + "lighting", "animation", "gaussian splatting", + "vr", + "efficient", "ar", "face", - "neural rendering", - "efficient", - "head", - "vr" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -11256,10 +11350,10 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", "geometry", - "efficient" + "efficient", + "3d reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -11286,11 +11380,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -11319,9 +11413,9 @@ ], "github_url": "https://github.com/nerfstudio-project/gsplat", "keywords": [ - "gaussian splatting", + "nerf", "ar", - "nerf" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -11348,11 +11442,11 @@ ], "github_url": "", "keywords": [ + "large scene", + "3d gaussian", "gaussian splatting", "ar", - 
"face", - "large scene", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -11375,10 +11469,10 @@ ], "github_url": "", "keywords": [ - "face", + "3d gaussian", "ar", - "high-fidelity", - "3d gaussian" + "face", + "high-fidelity" ], "citations": 0, "semantic_url": "" @@ -11401,12 +11495,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "high-fidelity", "3d reconstruction", "nerf", - "3d gaussian", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -11434,12 +11528,12 @@ "keywords": [ "dynamic", "gaussian splatting", - "ar", - "3d reconstruction", + "tracking", "deformation", + "3d reconstruction", + "understanding", "efficient", - "tracking", - "understanding" + "ar" ], "citations": 0, "semantic_url": "" @@ -11464,11 +11558,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "face", "deformation", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -11492,9 +11586,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -11522,11 +11616,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", "ar", "fast", - "nerf", - "3d gaussian", "compact" ], "citations": 0, @@ -11555,10 +11649,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "ar", - "efficient", + "high-fidelity", "mapping", - "high-fidelity" + "ar", + "efficient" ], "citations": 0, "semantic_url": "" @@ -11583,13 +11677,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "few-shot", - "face", - "3d gaussian", - "understanding" + "understanding", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -11620,12 +11714,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", "3d gaussian", + "gaussian splatting", "efficient", - 
"lightweight" + "lightweight", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -11648,13 +11742,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "human", "ar", - "fast", "face", - "3d gaussian", - "body" + "body", + "fast" ], "citations": 0, "semantic_url": "" @@ -11680,11 +11774,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", "fast", - "3d gaussian" + "compression" ], "citations": 0, "semantic_url": "" @@ -11705,10 +11799,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian", + "ar", "sparse-view" ], "citations": 0, @@ -11730,17 +11824,17 @@ ], "github_url": "", "keywords": [ + "sparse view", "dynamic", + "3d gaussian", + "head", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "3d gaussian", - "efficient", - "head", - "sparse view", + "understanding", "robotics", - "understanding" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -11762,10 +11856,10 @@ ], "github_url": "", "keywords": [ - "human", + "geometry", "gaussian splatting", + "human", "ar", - "geometry", "face" ], "citations": 0, @@ -11790,14 +11884,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", + "sparse view", "geometry", - "motion", "3d gaussian", - "sparse view", - "robotics" + "motion", + "gaussian splatting", + "3d reconstruction", + "robotics", + "ar" ], "citations": 0, "semantic_url": "" @@ -11820,10 +11914,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "autonomous driving", "3d gaussian", - "autonomous driving" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -11847,11 +11941,11 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "motion", "3d gaussian", - "tracking" + "motion", + "gaussian splatting", + "tracking", + "ar" ], "citations": 0, 
"semantic_url": "" @@ -11877,10 +11971,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "compression" ], "citations": 0, "semantic_url": "" @@ -11905,13 +11999,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "3d gaussian", + "human", + "understanding", "robotics", - "understanding" + "ar" ], "citations": 0, "semantic_url": "" @@ -11934,12 +12028,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "3d reconstruction", "3d gaussian", + "gaussian splatting", + "high-fidelity", "deformation", - "high-fidelity" + "3d reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -11960,11 +12054,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -11988,9 +12082,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", "slam", + "gaussian splatting", "high-fidelity" ], "citations": 0, @@ -12015,11 +12109,11 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", - "efficient", + "gaussian splatting", "mapping", + "efficient", + "ar", "compact" ], "citations": 0, @@ -12043,11 +12137,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", - "ar", + "deformation", "3d reconstruction", - "geometry", - "deformation" + "ar" ], "citations": 0, "semantic_url": "" @@ -12075,10 +12169,10 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian", - "sparse view", "sparse-view" ], "citations": 0, @@ -12111,10 +12205,10 @@ "keywords": [ "dynamic", "gaussian splatting", - "ar", + "high-fidelity", "urban scene", "efficient", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -12138,12 +12232,12 @@ ], 
"github_url": "", "keywords": [ - "ar", "geometry", + "3d gaussian", + "high-fidelity", "nerf", "few-shot", - "3d gaussian", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -12167,11 +12261,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "nerf", - "3d gaussian", + "ar", "semantic" ], "citations": 0, @@ -12195,11 +12289,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", + "tracking", "ar", - "geometry", - "fast", - "tracking" + "fast" ], "citations": 0, "semantic_url": "" @@ -12225,10 +12319,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "autonomous driving", "ar", - "3d gaussian", - "autonomous driving", "neural rendering" ], "citations": 0, @@ -12255,15 +12349,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", - "nerf", "3d gaussian", - "autonomous driving", + "gaussian splatting", "vr", "survey", - "robotics" + "3d reconstruction", + "nerf", + "robotics", + "ar", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -12296,12 +12390,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "high-fidelity", "mapping", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -12325,10 +12419,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", "efficient", + "ar", "compact" ], "citations": 0, @@ -12352,12 +12446,12 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", + "human", "avatar", - "3d gaussian", "efficient", + "ar", "semantic" ], "citations": 0, @@ -12383,13 +12477,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "high-fidelity", "4d", "ar", "real-time rendering", - "face", - "3d gaussian", - "high-fidelity" + "face" ], "citations": 0, 
"semantic_url": "" @@ -12412,11 +12506,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -12490,9 +12584,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "survey", - "3d gaussian" + "survey" ], "citations": 0, "semantic_url": "" @@ -12515,11 +12609,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "3d gaussian", + "ar", "sparse-view" ], "citations": 0, @@ -12546,9 +12640,9 @@ ], "github_url": "https://github.com/liwrui/SceneDreamer360", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -12573,10 +12667,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "relightable", "gaussian splatting", + "lighting", + "relighting", "ar", - "face", - "3d gaussian" + "illumination", + "face" ], "citations": 0, "semantic_url": "" @@ -12602,9 +12700,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "semantic", - "3d gaussian" + "semantic" ], "citations": 0, "semantic_url": "" @@ -12626,12 +12724,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "lighting", "gaussian splatting", "ar", - "geometry", "real-time rendering", "face", - "3d gaussian" + "reflection" ], "citations": 0, "semantic_url": "" @@ -12655,10 +12755,10 @@ ], "github_url": "", "keywords": [ - "ar", - "high-fidelity", "3d gaussian", - "efficient" + "efficient", + "ar", + "high-fidelity" ], "citations": 0, "semantic_url": "" @@ -12685,14 +12785,14 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", - "4d", + "deformation", "segmentation", "3d reconstruction", - "ar", - "motion", - "deformation", - "efficient" + "4d", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -12714,11 +12814,11 @@ ], 
"github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", "real-time rendering", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -12738,12 +12838,12 @@ ], "github_url": "https://github.com/goldoak/GSFusion", "keywords": [ - "gaussian splatting", - "ar", - "high quality", "3d gaussian", + "gaussian splatting", "mapping", "robotics", + "ar", + "high quality", "compact" ], "citations": 0, @@ -12768,12 +12868,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "lighting", "3d reconstruction", + "relighting", + "efficient", + "light transport", + "shadow", "face", - "3d gaussian", - "efficient" + "ar" ], "citations": 0, "semantic_url": "" @@ -12796,10 +12900,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -12822,8 +12926,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -12848,10 +12952,10 @@ "github_url": "https://github.com/GANWANSHUI/GaussianOcc.git", "keywords": [ "gaussian splatting", - "ar", - "fast", "efficient", - "semantic" + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -12873,10 +12977,10 @@ ], "github_url": "https://github.com/TrickyGo/Pano2Room", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -12902,13 +13006,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", "gaussian splatting", - "ar", "nerf", - "localization", - "3d gaussian", "efficient", - "outdoor" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -12933,9 +13037,9 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -12961,11 +13065,11 @@ ], 
"github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "3d gaussian", - "understanding" + "understanding", + "ar" ], "citations": 0, "semantic_url": "" @@ -12989,13 +13093,13 @@ ], "github_url": "", "keywords": [ - "human", - "ar", - "3d reconstruction", "geometry", - "nerf", "3d gaussian", + "3d reconstruction", + "nerf", + "human", "understanding", + "ar", "semantic" ], "citations": 0, @@ -13025,15 +13129,15 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", + "motion", "gaussian splatting", + "avatar", "ar", "face", - "avatar", - "motion", - "3d gaussian", - "neural rendering", - "head", - "body" + "body", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -13058,12 +13162,12 @@ ], "github_url": "", "keywords": [ - "ar", - "localization", "3d gaussian", + "slam", "tracking", "mapping", - "slam" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -13083,11 +13187,12 @@ ], "github_url": "", "keywords": [ + "lighting", "gaussian splatting", - "compression", - "ar", "efficient", - "compact" + "ar", + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -13110,19 +13215,19 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", + "deformation", "human", - "ar", - "geometry", - "real-time rendering", - "fast", "avatar", - "3d gaussian", - "deformation", "efficient", + "ar", + "real-time rendering", "body", - "semantic" + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -13144,15 +13249,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "human", - "ar", - "geometry", + "deformation", "nerf", "avatar", - "3d gaussian", - "deformation" + "human", + "ar" ], "citations": 0, "semantic_url": "" @@ -13175,9 +13280,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "shadow", + "gaussian splatting" ], "citations": 0, 
"semantic_url": "" @@ -13203,11 +13309,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", "ar", - "face", - "motion", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -13229,10 +13335,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "lighting", "gaussian splatting", - "face", + "relighting", "ar", - "geometry" + "illumination", + "face" ], "citations": 0, "semantic_url": "" @@ -13255,14 +13364,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "vr", + "nerf", "ar", - "fast", "real-time rendering", - "nerf", - "geometry", - "3d gaussian", - "vr" + "fast" ], "citations": 0, "semantic_url": "" @@ -13290,11 +13399,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "acceleration", "efficient", - "acceleration" + "ar" ], "citations": 0, "semantic_url": "" @@ -13315,6 +13424,8 @@ ], "github_url": "", "keywords": [ + "relighting", + "lighting", "gaussian splatting", "ar" ], @@ -13341,10 +13452,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "deformation", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "deformation" ], "citations": 0, "semantic_url": "" @@ -13367,13 +13478,13 @@ ], "github_url": "", "keywords": [ - "segmentation", - "ar", "geometry", - "real-time rendering", + "segmentation", "nerf", "understanding", - "semantic" + "ar", + "semantic", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -13397,12 +13508,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", "nerf", - "3d gaussian", "understanding", + "ar", "semantic" ], "citations": 0, @@ -13427,14 +13538,14 @@ ], "github_url": "", "keywords": [ + "sparse view", "dynamic", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", "nerf", - "face", - "sparse view", - "mapping" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ 
-13459,10 +13570,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "nerf", "3d gaussian", - "nerf" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -13488,16 +13599,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "motion", "3d gaussian", - "efficient", - "vr", + "gaussian splatting", "tracking", + "high-fidelity", + "vr", + "3d reconstruction", "robotics", - "high-fidelity" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -13528,14 +13639,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "head", "animation", - "ar", - "fast", + "gaussian splatting", + "high-fidelity", "few-shot", "avatar", - "head", - "high-fidelity" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -13558,14 +13669,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "slam", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", + "ar", "real-time rendering", - "face", - "3d gaussian", - "slam" + "face" ], "citations": 0, "semantic_url": "" @@ -13589,11 +13700,15 @@ ], "github_url": "https://github.com/zhanglbthu/PRTGaussian", "keywords": [ - "ar", "geometry", - "fast", "3d gaussian", - "efficient" + "relightable", + "lighting", + "relighting", + "efficient", + "light transport", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -13615,11 +13730,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -13641,11 +13756,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", "nerf", + "ar", "sparse-view" ], "citations": 0, @@ -13671,12 +13786,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "face", - "3d gaussian" + "ar", + "face" ], "citations": 0, 
"semantic_url": "" @@ -13699,10 +13814,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -13725,14 +13840,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "motion", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -13756,13 +13871,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "compression", "ar", "real-time rendering", "fast", - "3d gaussian", - "compact" + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -13782,11 +13897,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "high quality", + "illumination", "real-time rendering", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -13813,9 +13929,15 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "lighting", + "ray tracing", + "relighting", "ar", - "3d gaussian" + "shadow", + "illumination", + "reflection" ], "citations": 0, "semantic_url": "" @@ -13837,12 +13959,12 @@ ], "github_url": "", "keywords": [ - "segmentation", - "ar", "3d gaussian", - "autonomous driving", + "segmentation", "understanding", - "semantic" + "ar", + "semantic", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -13864,14 +13986,17 @@ ], "github_url": "", "keywords": [ + "geometry", + "outdoor", "gaussian splatting", - "ar", + "lighting", + "high-fidelity", "3d reconstruction", - "fast", "nerf", - "geometry", - "outdoor", - "high-fidelity" + "relighting", + "ar", + "shadow", + "fast" ], "citations": 0, "semantic_url": "" @@ -13892,9 +14017,9 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", - "semantic", - "3d gaussian" + "semantic" ], "citations": 0, "semantic_url": "" @@ -13918,8 
+14043,8 @@ "github_url": "https://github.com/uhhhci/RealityFusion", "keywords": [ "vr", - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -13940,14 +14065,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", - "fast", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "3d reconstruction", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -13979,15 +14104,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "animation", - "4d", - "ar", "geometry", "motion", + "dynamic", "head", + "animation", + "high-fidelity", "mapping", - "high-fidelity" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -14011,12 +14136,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", "3d gaussian", - "sparse-view", - "outdoor" + "outdoor", + "gaussian splatting", + "ar", + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -14041,11 +14166,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "efficient" + "gaussian splatting", + "lighting", + "efficient", + "light transport", + "illumination", + "shadow", + "ar", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -14066,14 +14196,14 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "animation", "gaussian splatting", "human", + "avatar", "ar", "face", - "avatar", - "motion", - "3d gaussian", "body" ], "citations": 0, @@ -14119,11 +14249,11 @@ ], "github_url": "", "keywords": [ - "segmentation", - "ar", "3d gaussian", - "efficient", + "segmentation", "understanding", + "efficient", + "ar", "semantic" ], "citations": 0, @@ -14174,14 +14304,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "head", "3d gaussian", + "gaussian splatting", + "high-fidelity", "vr", + "nerf", "robotics", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -14206,13 +14336,13 @@ ], 
"github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "3d gaussian", - "efficient", + "gaussian splatting", "vr", - "robotics" + "robotics", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -14237,8 +14367,8 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -14264,13 +14394,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", "3d gaussian", - "efficient", + "gaussian splatting", "survey", - "understanding" + "understanding", + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -14295,10 +14425,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "face", + "neural rendering", "ar", - "neural rendering" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -14321,13 +14451,15 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "fast", - "nerf", "3d gaussian", + "gaussian splatting", + "high-fidelity", "mapping", - "high-fidelity" + "nerf", + "ar", + "shadow", + "illumination", + "fast" ], "citations": 0, "semantic_url": "" @@ -14347,9 +14479,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -14372,11 +14504,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "face", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -14403,12 +14535,12 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -14432,9 +14564,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, 
"semantic_url": "" @@ -14456,15 +14588,15 @@ ], "github_url": "", "keywords": [ - "human", - "ar", - "few-shot", - "face", - "avatar", - "3d gaussian", "head", + "3d gaussian", + "high-fidelity", "vr", - "high-fidelity" + "few-shot", + "avatar", + "human", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -14487,12 +14619,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "3d gaussian", + "gaussian splatting", "tracking", - "high-fidelity" + "high-fidelity", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -14515,9 +14647,9 @@ ], "github_url": "https://github.com/Qi-Yangsjtu/GGSC", "keywords": [ + "ar", "gaussian splatting", - "compression", - "ar" + "compression" ], "citations": 0, "semantic_url": "" @@ -14538,12 +14670,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "segmentation", "understanding", + "ar", "semantic" ], "citations": 0, @@ -14570,15 +14702,15 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "real-time rendering", "nerf", - "3d gaussian", + "human", "efficient", - "mapping" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -14606,10 +14738,10 @@ ], "github_url": "https://github.com/HansenHuang0823/PlacidDreamer", "keywords": [ - "ar", - "geometry", "fast", - "3d gaussian" + "3d gaussian", + "ar", + "geometry" ], "citations": 0, "semantic_url": "" @@ -14631,9 +14763,9 @@ ], "github_url": "https://github.com/LMozart/ECCV2024-GCS-BEG", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -14656,13 +14788,13 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "face", - "motion", - "3d gaussian" + "ar", + "face" ], "citations": 0, 
"semantic_url": "" @@ -14693,15 +14825,15 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", + "sparse view", "geometry", - "nerf", "3d gaussian", - "neural rendering", + "gaussian splatting", "vr", - "sparse view" + "nerf", + "human", + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -14722,13 +14854,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", + "efficient", "ar", - "fast", "real-time rendering", - "nerf", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -14754,11 +14886,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "compression", - "ar", "3d reconstruction", - "3d gaussian" + "ar", + "compression" ], "citations": 0, "semantic_url": "" @@ -14783,12 +14915,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", "ar", "real-time rendering", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -14815,11 +14947,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -14842,10 +14974,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "motion", "slam", - "motion" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -14868,12 +15000,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", + "lighting", "gaussian splatting", + "efficient", "ar", - "fast", - "motion", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -14897,11 +15030,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", "ar", - "geometry", - "motion", "semantic" ], "citations": 0, @@ -14926,17 +15059,17 @@ ], "github_url": "", "keywords": [ - "animation", + "geometry", + "3d gaussian", "gaussian splatting", 
+ "animation", + "3d reconstruction", "human", + "avatar", "ar", - "3d reconstruction", - "geometry", - "fast", "face", - "avatar", - "3d gaussian", - "body" + "body", + "fast" ], "citations": 0, "semantic_url": "" @@ -14985,15 +15118,19 @@ ], "github_url": "", "keywords": [ - "animation", + "relightable", + "lighting", "gaussian splatting", + "animation", "human", - "ar", - "fast", + "ray tracing", "avatar", "efficient", + "shadow", + "ar", + "body", "sparse-view", - "body" + "fast" ], "citations": 0, "semantic_url": "" @@ -15016,9 +15153,9 @@ "github_url": "https://github.com/lsztzp/Pathformer3D", "keywords": [ "human", - "tracking", + "3d gaussian", "ar", - "3d gaussian" + "tracking" ], "citations": 0, "semantic_url": "" @@ -15042,8 +15179,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "illumination", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -15067,10 +15205,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -15094,15 +15232,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "high quality", "nerf", - "motion", - "3d gaussian", + "ar", "lightweight", - "mapping" + "high quality" ], "citations": 0, "semantic_url": "" @@ -15123,10 +15261,10 @@ ], "github_url": "https://github.com/ZhentaoHuang/Textured-GS", "keywords": [ + "efficient", "gaussian splatting", "face", - "ar", - "efficient" + "ar" ], "citations": 0, "semantic_url": "" @@ -15148,11 +15286,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", "efficient", - "lightweight" + "lightweight", + "ar" ], "citations": 0, "semantic_url": "" @@ -15176,11 +15314,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - 
"3d gaussian" + "ar", + "illumination", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -15202,12 +15341,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "lighting", + "survey", "3d reconstruction", "nerf", - "3d gaussian", - "survey" + "ar" ], "citations": 0, "semantic_url": "" @@ -15228,13 +15368,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "gaussian splatting", + "deformation", "human", - "ar", "avatar", - "deformation", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -15256,12 +15396,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "real-time rendering", + "3d gaussian", + "gaussian splatting", "nerf", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -15289,12 +15429,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", + "robotics", + "ray tracing", + "efficient", + "shadow", "ar", "fast", - "3d gaussian", - "efficient", - "robotics" + "reflection" ], "citations": 0, "semantic_url": "" @@ -15318,18 +15462,18 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "animation", + "high-fidelity", + "deformation", "human", - "ar", "avatar", - "motion", - "3d gaussian", - "neural rendering", - "deformation", "efficient", + "ar", "body", - "high-fidelity" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -15352,11 +15496,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -15404,14 +15548,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "real-time rendering", - "nerf", "3d gaussian", + "gaussian splatting", + "high-fidelity", "deformation", "mapping", - "high-fidelity" + "nerf", + "ar", + "real-time rendering" ], "citations": 0, 
"semantic_url": "" @@ -15434,14 +15578,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "gaussian splatting", "human", - "ar", - "real-time rendering", "avatar", - "3d gaussian", "efficient", - "head" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -15468,14 +15612,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "4d", + "vr", "segmentation", - "ar", - "3d gaussian", + "understanding", + "4d", "efficient", - "vr", - "understanding" + "ar" ], "citations": 0, "semantic_url": "" @@ -15504,11 +15648,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -15532,11 +15676,11 @@ ], "github_url": "", "keywords": [ + "motion", "gaussian splatting", + "nerf", "ar", "real-time rendering", - "nerf", - "motion", "body" ], "citations": 0, @@ -15559,10 +15703,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "high-fidelity", - "3d gaussian" + "gaussian splatting", + "high-fidelity" ], "citations": 0, "semantic_url": "" @@ -15584,11 +15728,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -15613,10 +15757,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "human", - "ar", "avatar", - "3d gaussian", + "ar", "body" ], "citations": 0, @@ -15640,11 +15784,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "urban scene", + "ar", "face", - "3d gaussian", "neural rendering" ], "citations": 0, @@ -15670,12 +15814,12 @@ ], "github_url": "https://github.com/wrld/Free-SurGS", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", + "3d gaussian", "motion", - "3d gaussian" + "gaussian splatting", + "3d 
reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -15702,9 +15846,9 @@ ], "github_url": "", "keywords": [ - "ar", + "3d gaussian", "high quality", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -15730,13 +15874,13 @@ ], "github_url": "", "keywords": [ + "sparse view", "dynamic", + "3d gaussian", "gaussian splatting", "ar", "real-time rendering", - "3d gaussian", - "autonomous driving", - "sparse view" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -15761,10 +15905,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "semantic", - "3d gaussian" + "gaussian splatting", + "semantic" ], "citations": 0, "semantic_url": "" @@ -15785,9 +15929,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "nerf", "ar", - "nerf" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -15812,10 +15956,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "neural rendering", "3d gaussian", - "neural rendering" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -15842,12 +15986,12 @@ "github_url": "", "keywords": [ "segmentation", - "ar", - "fast", "nerf", - "efficient", "understanding", - "semantic" + "efficient", + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -15870,11 +16014,11 @@ ], "github_url": "", "keywords": [ - "ar", "3d gaussian", + "medical", "efficient", - "sparse-view", - "medical" + "ar", + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -15899,13 +16043,13 @@ ], "github_url": "", "keywords": [ + "sparse view", + "geometry", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "neural rendering", - "sparse view", - "sparse-view" + "ar", + "sparse-view", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -15927,12 +16071,12 @@ ], "github_url": "https://github.com/horizon-research/Fov-3DGS", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "vr", + "human", 
"ar", - "3d gaussian", - "neural rendering", - "vr" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -15955,15 +16099,15 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "geometry", - "motion", "3d gaussian", + "motion", + "gaussian splatting", + "high-fidelity", + "human", "efficient", - "body", - "high-fidelity" + "ar", + "body" ], "citations": 0, "semantic_url": "" @@ -15991,11 +16135,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -16022,16 +16167,16 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "human", - "ar", - "high quality", - "motion", "3d gaussian", - "efficient", + "motion", "head", + "gaussian splatting", "tracking", - "understanding" + "human", + "understanding", + "efficient", + "ar", + "high quality" ], "citations": 0, "semantic_url": "" @@ -16059,10 +16204,10 @@ ], "github_url": "", "keywords": [ - "ar", "3d gaussian", "efficient", "lightweight", + "ar", "compact" ], "citations": 0, @@ -16088,12 +16233,12 @@ ], "github_url": "", "keywords": [ - "human", - "ar", - "avatar", "head", "3d gaussian", - "high-fidelity" + "high-fidelity", + "human", + "avatar", + "ar" ], "citations": 0, "semantic_url": "" @@ -16118,14 +16263,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", + "tracking", "4d", - "ar", - "geometry", - "motion", "efficient", - "tracking" + "ar" ], "citations": 0, "semantic_url": "" @@ -16152,10 +16297,10 @@ "github_url": "https://github.com/nyu-systems/Grendel-GS", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -16184,11 +16329,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "animation", "gaussian splatting", - "ar", - 
"geometry", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -16210,11 +16355,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", - "3d gaussian", - "efficient" + "efficient", + "compression" ], "citations": 0, "semantic_url": "" @@ -16239,13 +16384,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "lighting", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", + "ar", "real-time rendering", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -16274,10 +16420,10 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", "ar", - "fast", - "motion" + "fast" ], "citations": 0, "semantic_url": "" @@ -16303,8 +16449,8 @@ ], "github_url": "", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -16324,12 +16470,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "3d gaussian", + "gaussian splatting", "vr", - "robotics" + "nerf", + "robotics", + "ar" ], "citations": 0, "semantic_url": "" @@ -16352,12 +16498,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", "real-time rendering", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -16385,13 +16531,13 @@ ], "github_url": "https://github.com/Xiaohao-Xu/SLAM-under-Perturbation", "keywords": [ - "gaussian splatting", - "ar", - "nerf", - "localization", "motion", + "slam", + "gaussian splatting", "mapping", - "slam" + "nerf", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -16416,14 +16562,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "gaussian splatting", + "high-fidelity", "human", - "ar", "avatar", - "3d gaussian", - "body", - "high-fidelity" + "ar", + "body" ], "citations": 0, "semantic_url": "" @@ -16447,15 +16593,16 @@ "github_url": "", "keywords": [ "dynamic", + "3d 
gaussian", + "lighting", "gaussian splatting", + "deformation", "4d", - "compression", + "efficient", + "lightweight", "ar", "real-time rendering", - "3d gaussian", - "deformation", - "efficient", - "lightweight" + "compression" ], "citations": 0, "semantic_url": "" @@ -16480,13 +16627,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", + "efficient", "ar", - "high quality", "fast", - "3d gaussian", - "efficient", - "high-fidelity" + "high quality" ], "citations": 0, "semantic_url": "" @@ -16510,10 +16657,10 @@ ], "github_url": "", "keywords": [ - "lightweight", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "lightweight" ], "citations": 0, "semantic_url": "" @@ -16541,12 +16688,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", + "outdoor", + "gaussian splatting", "vr", "understanding", - "outdoor" + "ar" ], "citations": 0, "semantic_url": "" @@ -16569,12 +16716,12 @@ "github_url": "https://github.com/deguchihiroyuki/E2GS", "keywords": [ "dynamic", + "motion", "gaussian splatting", - "ar", "3d reconstruction", - "fast", "nerf", - "motion" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -16601,12 +16748,12 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", - "face", + "3d gaussian", "motion", - "3d gaussian" + "dynamic", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -16630,11 +16777,11 @@ ], "github_url": "", "keywords": [ - "segmentation", - "ar", - "motion", "3d gaussian", - "tracking" + "motion", + "tracking", + "segmentation", + "ar" ], "citations": 0, "semantic_url": "" @@ -16657,11 +16804,11 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "nerf", "3d gaussian", - "efficient" + "nerf", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -16688,11 +16835,11 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", 
"high-fidelity", + "human", + "ar", "semantic" ], "citations": 0, @@ -16718,14 +16865,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", - "fast", "large scene", "3d gaussian", + "gaussian splatting", + "efficient rendering", "efficient", - "efficient rendering" + "ar", + "real-time rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -16750,9 +16897,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -16776,12 +16923,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "real-time rendering", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -16804,10 +16951,11 @@ ], "github_url": "", "keywords": [ - "face", + "3d gaussian", + "ray casting", "avatar", "ar", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -16832,14 +16980,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", - "ar", - "3d gaussian", - "efficient", "head", + "3d gaussian", + "gaussian splatting", "survey", - "compact" + "efficient", + "ar", + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -16861,13 +17009,13 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "dynamic", + "gaussian splatting", "tracking", - "robotics" + "robotics", + "ar" ], "citations": 0, "semantic_url": "" @@ -16889,12 +17037,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", - "geometry", "nerf", - "3d gaussian" + "ar", + "illumination", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -16919,12 +17069,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", + "nerf", "ar", "real-time rendering", - 
"nerf", - "3d gaussian" + "compression" ], "citations": 0, "semantic_url": "" @@ -16954,12 +17104,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "gaussian splatting", "4d", "ar", - "high quality", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -16981,11 +17131,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -17005,10 +17155,10 @@ ], "github_url": "https://github.com/trapoom555/GradeADreamer", "keywords": [ - "gaussian splatting", - "face", + "geometry", "ar", - "geometry" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -17029,12 +17179,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", + "path tracing", "3d gaussian", + "gaussian splatting", + "lighting", + "efficient rendering", "efficient", - "efficient rendering" + "illumination", + "face", + "ar", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -17059,10 +17213,10 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", + "3d gaussian", "motion", - "3d gaussian" + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -17085,13 +17239,13 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", - "fast", + "head", "3d gaussian", + "gaussian splatting", + "human", "efficient", - "head" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -17116,10 +17270,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "gaussian splatting", "nerf", + "ar", "face" ], "citations": 0, @@ -17144,11 +17298,12 @@ ], "github_url": "https://github.com/Xian-Bei/GaussianForest", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", + "ray tracing", + "efficient", "ar", - "3d gaussian", - "efficient" + "compression" ], "citations": 0, "semantic_url": "" @@ -17174,11 
+17329,11 @@ ], "github_url": "", "keywords": [ - "ar", - "high quality", - "nerf", "3d gaussian", - "semantic" + "nerf", + "ar", + "semantic", + "high quality" ], "citations": 0, "semantic_url": "" @@ -17200,13 +17355,13 @@ ], "github_url": "", "keywords": [ - "human", - "ar", - "3d reconstruction", "geometry", - "avatar", "3d gaussian", - "high-fidelity" + "high-fidelity", + "3d reconstruction", + "human", + "avatar", + "ar" ], "citations": 0, "semantic_url": "" @@ -17230,10 +17385,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -17257,10 +17412,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -17288,11 +17443,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "mapping", "ar", - "3d gaussian", - "lightweight", - "mapping" + "lightweight" ], "citations": 0, "semantic_url": "" @@ -17314,14 +17469,14 @@ ], "github_url": "", "keywords": [ - "human", + "head", + "3d gaussian", "gaussian splatting", - "ar", + "vr", "nerf", - "3d gaussian", + "human", "efficient", - "head", - "vr", + "ar", "compact" ], "citations": 0, @@ -17350,15 +17505,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", "gaussian splatting", - "ar", - "fast", + "high-fidelity", "nerf", - "face", - "3d gaussian", "efficient", - "outdoor", - "high-fidelity" + "ar", + "illumination", + "face", + "fast" ], "citations": 0, "semantic_url": "" @@ -17385,10 +17541,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "lightweight", + "3d gaussian", "ar", - "3d gaussian" + "lightweight", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -17413,14 +17569,15 @@ ], "github_url": "https://github.com/Srameo/LE3D", "keywords": [ + "3d gaussian", + "motion", "gaussian 
splatting", + "lighting", + "mapping", + "nerf", "ar", - "fast", "real-time rendering", - "nerf", - "motion", - "3d gaussian", - "mapping" + "fast" ], "citations": 0, "semantic_url": "" @@ -17442,11 +17599,11 @@ ], "github_url": "", "keywords": [ - "compression", + "3d gaussian", + "vr", "ar", "high quality", - "3d gaussian", - "vr" + "compression" ], "citations": 0, "semantic_url": "" @@ -17469,12 +17626,12 @@ "github_url": "", "keywords": [ "dynamic", - "segmentation", - "ar", - "motion", "3d gaussian", + "motion", + "segmentation", "efficient", - "lightweight" + "lightweight", + "ar" ], "citations": 0, "semantic_url": "" @@ -17499,11 +17656,11 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "face", "motion", - "deformation" + "deformation", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -17530,10 +17687,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian", - "neural rendering" + "neural rendering", + "reflection" ], "citations": 0, "semantic_url": "" @@ -17557,12 +17715,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "fast", "face", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -17587,10 +17745,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -17617,11 +17775,11 @@ "github_url": "", "keywords": [ "dynamic", - "ar", - "face", "3d gaussian", + "high-fidelity", "understanding", - "high-fidelity" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -17643,14 +17801,15 @@ ], "github_url": "", "keywords": [ - "animation", + "3d gaussian", "gaussian splatting", + "animation", + "survey", "human", - "ar", "avatar", - "3d gaussian", - "survey", - "body" + "ar", + "body", + "reflection" ], "citations": 0, "semantic_url": "" @@ -17675,13 +17834,13 @@ ], "github_url": "", 
"keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", "4d", - "ar", - "geometry", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -17703,13 +17862,13 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "real-time rendering", - "nerf", "3d gaussian", + "gaussian splatting", + "high-fidelity", "deformation", - "high-fidelity" + "nerf", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -17735,14 +17894,15 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "deformation", "3d gaussian", "head", + "medical", + "lighting", + "gaussian splatting", "tracking", - "compact", - "medical" + "deformation", + "ar", + "compact" ], "citations": 0, "semantic_url": "" @@ -17766,14 +17926,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "4d", - "ar", "geometry", - "motion", "3d gaussian", + "dynamic", + "motion", + "lighting", "deformation", + "4d", "efficient", + "ar", "compact" ], "citations": 0, @@ -17799,13 +17960,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "motion", "3d gaussian", + "gaussian splatting", + "high-fidelity", + "3d reconstruction", "robotics", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -17825,11 +17986,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "ray casting", + "efficient", "ar", - "fast", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -17852,13 +18014,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", "nerf", - "3d gaussian", - "neural rendering" + "ar", + "neural rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -17883,11 +18045,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, 
"semantic_url": "" @@ -17909,11 +18071,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "recognition", "mapping", - "recognition" + "ar" ], "citations": 0, "semantic_url": "" @@ -17938,10 +18100,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -17962,11 +18124,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "lighting", + "gaussian splatting", "efficient", - "lightweight" + "lightweight", + "ar" ], "citations": 0, "semantic_url": "" @@ -17992,11 +18155,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", "efficient", + "ar", "semantic" ], "citations": 0, @@ -18027,10 +18190,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "understanding", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -18055,15 +18218,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "mapping", "segmentation", + "understanding", "ar", - "high quality", + "semantic", "fast", - "3d gaussian", - "mapping", - "understanding", - "semantic" + "high quality" ], "citations": 0, "semantic_url": "" @@ -18086,13 +18249,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "deformation", "3d reconstruction", "nerf", - "motion", - "3d gaussian", - "deformation" + "ar" ], "citations": 0, "semantic_url": "" @@ -18115,13 +18278,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", + "ar", "face", - "3d gaussian", - "head" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -18147,11 +18310,11 @@ "github_url": "https://github.com/tyhuang0428/DreamPhysics", "keywords": [ 
"dynamic", - "4d", - "ar", + "3d gaussian", "motion", "deformation", - "3d gaussian" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -18176,11 +18339,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "real-time rendering", "face", - "3d gaussian", + "real-time rendering", "shape reconstruction" ], "citations": 0, @@ -18204,10 +18367,10 @@ "keywords": [ "dynamic", "gaussian splatting", - "4d", - "ar", + "high-fidelity", "nerf", - "high-fidelity" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -18233,9 +18396,9 @@ ], "github_url": "", "keywords": [ - "ar", + "nerf", "high quality", - "nerf" + "ar" ], "citations": 0, "semantic_url": "" @@ -18260,15 +18423,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "head", "gaussian splatting", + "high-fidelity", "4d", "ar", - "geometry", - "face", - "3d gaussian", - "head", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -18293,12 +18456,12 @@ ], "github_url": "", "keywords": [ + "geometry", "dynamic", "gaussian splatting", - "ar", - "geometry", + "deformation", "nerf", - "deformation" + "ar" ], "citations": 0, "semantic_url": "" @@ -18320,8 +18483,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "relightable", + "lighting", + "relighting", "ar", - "3d gaussian" + "illumination" ], "citations": 0, "semantic_url": "" @@ -18346,12 +18513,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", "fast", - "3d gaussian", - "compact" + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -18376,13 +18543,13 @@ ], "github_url": "https://github.com/Ruyi-Zha/r2_gaussian", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", + "ar", "face", - "3d gaussian", - "sparse-view" + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -18407,9 +18574,9 @@ ], "github_url": "", "keywords": [ - "ar", "3d gaussian", - "efficient" + 
"efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -18438,15 +18605,15 @@ "github_url": "https://github.com/nnanhuang/S3Gaussian/", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "4d", - "ar", "3d reconstruction", - "fast", "nerf", - "3d gaussian", + "4d", "efficient", + "ar", "autonomous driving", + "fast", "compact" ], "citations": 0, @@ -18468,12 +18635,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", - "lightweight" + "ar", + "lightweight", + "fast" ], "citations": 0, "semantic_url": "" @@ -18496,15 +18663,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", - "ar", + "tracking", "3d reconstruction", - "geometry", - "motion", - "3d gaussian", - "tracking", - "robotics" + "robotics", + "ar" ], "citations": 0, "semantic_url": "" @@ -18527,13 +18694,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", "4d", - "ar", - "geometry", - "motion", "efficient", + "ar", "semantic" ], "citations": 0, @@ -18562,14 +18729,14 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", + "3d gaussian", "motion", + "dynamic", "deformation", - "3d gaussian", + "robotics", "efficient", - "robotics" + "ar" ], "citations": 0, "semantic_url": "" @@ -18594,11 +18761,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", "face", - "3d gaussian", "neural rendering" ], "citations": 0, @@ -18624,10 +18791,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "sparse-view", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -18651,15 +18818,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "motion", "3d gaussian", + "slam", + "gaussian splatting", "tracking", - "robotics", "mapping", - 
"slam" + "robotics", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -18685,18 +18852,18 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "gaussian splatting", "animation", + "high-fidelity", + "deformation", "human", - "ar", - "real-time rendering", "avatar", - "3d gaussian", - "deformation", - "efficient", "efficient rendering", - "head", - "high-fidelity" + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -18719,13 +18886,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "tracking", "segmentation", "ar", - "fast", + "semantic", "face", - "3d gaussian", - "tracking", - "semantic" + "fast" ], "citations": 0, "semantic_url": "" @@ -18748,13 +18915,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", - "ar", - "face", - "avatar", "deformation", - "3d gaussian", + "avatar", "efficient", + "ar", + "face", "body" ], "citations": 0, @@ -18780,12 +18947,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "high quality", "fast", - "3d gaussian", - "efficient" + "high quality" ], "citations": 0, "semantic_url": "" @@ -18812,11 +18979,11 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "motion", "3d gaussian", - "high-fidelity" + "motion", + "gaussian splatting", + "high-fidelity", + "ar" ], "citations": 0, "semantic_url": "" @@ -18841,11 +19008,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "tracking", "4d", - "ar", - "3d gaussian", - "tracking" + "ar" ], "citations": 0, "semantic_url": "" @@ -18870,10 +19037,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "ar", "semantic" ], "citations": 0, @@ -18897,10 +19064,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "autonomous driving", "ar", - "semantic", - "autonomous driving" + "gaussian 
splatting", + "semantic" ], "citations": 0, "semantic_url": "" @@ -18923,8 +19090,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar" + "ar", + "shadow", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -18946,10 +19114,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -18975,14 +19143,14 @@ ], "github_url": "https://github.com/jasongzy/EG4D", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", "4d", + "efficient", "ar", - "geometry", "face", - "motion", - "efficient", "semantic" ], "citations": 0, @@ -19005,10 +19173,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", "ar", - "3d gaussian", "semantic" ], "citations": 0, @@ -19031,11 +19199,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "large scene", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -19057,12 +19225,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "mapping", "nerf", - "motion", - "3d gaussian", - "mapping" + "ar" ], "citations": 0, "semantic_url": "" @@ -19086,14 +19254,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "deformation", + "nerf", "4d", "ar", - "nerf", - "motion", - "3d gaussian", - "neural rendering", - "deformation" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -19118,15 +19286,15 @@ "github_url": "https://github.com/jinlab-imvr/Deform3DGS", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "vr", + "deformation", + "efficient", "ar", "real-time rendering", - "fast", "face", - "3d gaussian", - "deformation", - "efficient", - "vr" + "fast" ], "citations": 0, "semantic_url": "" @@ -19154,14 +19322,14 @@ ], "github_url": 
"", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "nerf", "3d gaussian", + "gaussian splatting", + "high-fidelity", "deformation", + "nerf", + "ar", "body", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -19182,10 +19350,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "compression" ], "citations": 0, "semantic_url": "" @@ -19211,11 +19379,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "neural rendering" + "lighting", + "gaussian splatting", + "ar", + "illumination", + "neural rendering", + "reflection" ], "citations": 0, "semantic_url": "" @@ -19240,16 +19411,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "segmentation", - "ar", - "localization", - "3d gaussian", - "efficient", "understanding", + "efficient", + "localization", + "ar", + "semantic", "compact", - "semantic" + "compression" ], "citations": 0, "semantic_url": "" @@ -19273,12 +19444,12 @@ ], "github_url": "https://github.com/huang-yh/GaussianFormer", "keywords": [ - "ar", "geometry", "3d gaussian", - "autonomous driving", "efficient", - "semantic" + "ar", + "semantic", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -19302,13 +19473,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", "dynamic", "gaussian splatting", + "deformation", "4d", "ar", - "geometry", - "motion", - "deformation", "compact" ], "citations": 0, @@ -19331,10 +19502,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "ar", - "real-time rendering", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -19362,16 +19533,16 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "mapping", "segmentation", - "ar", "3d reconstruction", - "localization", - "3d gaussian", + "human", + 
"robotics", + "ar", "neural rendering", - "mapping", - "robotics" + "localization" ], "citations": 0, "semantic_url": "" @@ -19395,13 +19566,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", "efficient", - "neural rendering", - "head" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -19425,11 +19596,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "large scene", "3d gaussian", + "large scene", + "gaussian splatting", + "ar", "semantic" ], "citations": 0, @@ -19457,11 +19628,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "human", "4d", "ar", - "motion", - "3d gaussian", "shape reconstruction" ], "citations": 0, @@ -19483,13 +19654,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "high-fidelity", + "nerf", "ar", "fast", - "nerf", - "3d gaussian", - "compact", - "high-fidelity" + "compact" ], "citations": 0, "semantic_url": "" @@ -19515,15 +19686,15 @@ ], "github_url": "", "keywords": [ + "geometry", "dynamic", + "motion", "gaussian splatting", + "high-fidelity", "4d", - "ar", - "geometry", - "fast", - "motion", "efficient", - "high-fidelity" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -19548,19 +19719,19 @@ ], "github_url": "https://github.com/eriksandstroem/Splat-SLAM", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", - "fast", - "localization", "3d gaussian", - "efficient", + "dynamic", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam", - "compact" + "3d reconstruction", + "efficient", + "ar", + "fast", + "compact", + "localization" ], "citations": 0, "semantic_url": "" @@ -19583,9 +19754,9 @@ "github_url": "", "keywords": [ "nerf", + "3d gaussian", "ar", - "sparse-view", - "3d gaussian" + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -19605,11 +19776,11 @@ ], "github_url": 
"https://github.com/tberriel/FeatSplat", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "fast", - "3d gaussian", - "semantic" + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -19629,10 +19800,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "deformation", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "deformation" ], "citations": 0, "semantic_url": "" @@ -19659,10 +19830,12 @@ ], "github_url": "", "keywords": [ - "ar", - "face", "3d gaussian", + "lighting", + "relighting", "efficient", + "ar", + "face", "compact" ], "citations": 0, @@ -19686,8 +19859,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -19714,11 +19887,11 @@ "github_url": "https://github.com/caiyuanhao1998/HDR-GS", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -19742,12 +19915,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "3d gaussian", - "efficient" + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -19773,11 +19946,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "lighting", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -19807,12 +19981,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "geometry", "head", - "efficient", + "geometry", + "gaussian splatting", "tracking", + "efficient", + "ar", "body" ], "citations": 0, @@ -19839,10 +20013,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "high-fidelity", "nerf", "ar", - "high-fidelity", - "3d gaussian" + "illumination" ], "citations": 0, "semantic_url": "" @@ -19867,10 +20042,10 @@ ], "github_url": "", "keywords": [ - 
"gaussian splatting", "segmentation", + "autonomous driving", "ar", - "autonomous driving" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -19894,10 +20069,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "semantic", - "3d gaussian" + "gaussian splatting", + "semantic" ], "citations": 0, "semantic_url": "" @@ -19919,11 +20094,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "ar", + "semantic", "face", - "3d gaussian", - "autonomous driving", - "semantic" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -19947,9 +20122,9 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "fast", "ar", - "fast" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -19976,11 +20151,11 @@ "github_url": "https://github.com/jiangchaokang/NeuroGauss4D-PCI", "keywords": [ "dynamic", + "deformation", "4d", + "efficient", "ar", - "deformation", - "autonomous driving", - "efficient" + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -20000,12 +20175,12 @@ ], "github_url": "https://github.com/AIBluefisher/DOGS", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", + "high-fidelity", "3d reconstruction", "nerf", - "3d gaussian", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -20026,15 +20201,15 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "localization", "3d gaussian", + "slam", "tracking", + "high-fidelity", "mapping", "understanding", - "slam", - "high-fidelity" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -20059,16 +20234,17 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", + "lighting", "3d reconstruction", - "real-time rendering", "nerf", - "fast", - "3d gaussian", - "neural rendering", "efficient", - "lightweight" + "lightweight", + "ar", + "real-time rendering", + "neural rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -20090,14 +20266,17 @@ ], 
"github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", - "face", "3d gaussian", + "relightable", + "gaussian splatting", + "lighting", + "nerf", + "relighting", + "efficient rendering", "efficient", - "efficient rendering" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -20123,13 +20302,13 @@ ], "github_url": "", "keywords": [ - "human", + "motion", "gaussian splatting", - "ar", + "deformation", "nerf", + "human", + "ar", "face", - "motion", - "deformation", "body" ], "citations": 0, @@ -20155,12 +20334,12 @@ ], "github_url": "", "keywords": [ - "human", "gaussian splatting", - "ar", + "high-fidelity", + "human", "avatar", - "body", - "high-fidelity" + "ar", + "body" ], "citations": 0, "semantic_url": "" @@ -20187,13 +20366,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "high-fidelity", + "human", "ar", "face", - "3d gaussian", "body", - "high-fidelity", "semantic" ], "citations": 0, @@ -20220,13 +20399,13 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "geometry", - "deformation", "3d gaussian", - "mapping" + "gaussian splatting", + "deformation", + "mapping", + "human", + "ar" ], "citations": 0, "semantic_url": "" @@ -20249,13 +20428,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "high-fidelity", "ar", - "geometry", "real-time rendering", - "face", - "3d gaussian", - "high-fidelity" + "face" ], "citations": 0, "semantic_url": "" @@ -20282,14 +20461,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", + "nerf", + "efficient", "ar", - "fast", "real-time rendering", - "nerf", - "geometry", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -20312,14 +20491,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", - "ar", "nerf", - "3d gaussian", - "efficient", "acceleration", - "semantic" + 
"efficient", + "ar", + "semantic", + "compression" ], "citations": 0, "semantic_url": "" @@ -20344,11 +20523,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", "3d gaussian", + "gaussian splatting", + "nerf", + "ar", "sparse-view", "compact" ], @@ -20374,15 +20553,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "avatar", "3d gaussian", - "efficient", "head", - "efficient rendering", + "gaussian splatting", "mapping", + "avatar", + "efficient", + "efficient rendering", + "ar", "body" ], "citations": 0, @@ -20413,11 +20592,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", "real-time rendering", "face", - "3d gaussian" + "reflection" ], "citations": 0, "semantic_url": "" @@ -20442,9 +20622,9 @@ ], "github_url": "https://github.com/xingy038/Dreamer-XL", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -20467,16 +20647,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "motion", "3d gaussian", + "slam", + "gaussian splatting", "tracking", + "high-fidelity", "mapping", - "compact", - "slam", - "high-fidelity" + "nerf", + "ar", + "compact" ], "citations": 0, "semantic_url": "" @@ -20500,12 +20680,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "lighting", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", "urban scene", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -20527,10 +20708,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "semantic", - "3d gaussian" + "gaussian splatting", + "semantic" ], "citations": 0, "semantic_url": "" @@ -20553,10 +20734,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "high-fidelity", - "3d gaussian" + "gaussian splatting", + "high-fidelity" ], "citations": 0, 
"semantic_url": "" @@ -20578,11 +20759,11 @@ "github_url": "", "keywords": [ "gaussian splatting", + "nerf", + "robotics", "ar", "real-time rendering", - "nerf", "fast", - "robotics", "compact" ], "citations": 0, @@ -20606,8 +20787,8 @@ ], "github_url": "", "keywords": [ - "human", "gaussian splatting", + "human", "ar", "face", "body" @@ -20634,18 +20815,18 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "motion", "animation", - "human", + "tracking", + "high-fidelity", "segmentation", + "human", + "avatar", "ar", - "geometry", "face", - "avatar", - "motion", - "3d gaussian", - "tracking", - "body", - "high-fidelity" + "body" ], "citations": 0, "semantic_url": "" @@ -20665,11 +20846,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -20689,10 +20870,10 @@ "github_url": "", "keywords": [ "dynamic", + "motion", "gaussian splatting", - "ar", "3d reconstruction", - "motion", + "ar", "semantic" ], "citations": 0, @@ -20712,12 +20893,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", + "neural rendering", "fast", - "3d gaussian", - "neural rendering" + "compression" ], "citations": 0, "semantic_url": "" @@ -20742,14 +20923,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "localization", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "ar", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -20774,10 +20955,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", + "ar", "semantic" ], "citations": 0, @@ -20799,13 +20980,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "motion", "3d gaussian", + "motion", + "gaussian splatting", + "high-fidelity", "vr", - "high-fidelity" + "ar", + 
"fast" ], "citations": 0, "semantic_url": "" @@ -20829,15 +21010,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", "large scene", "3d gaussian", + "slam", + "gaussian splatting", "tracking", + "high-fidelity", "mapping", - "slam", - "high-fidelity" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -20862,19 +21043,19 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "slam", "gaussian splatting", + "high-fidelity", + "mapping", + "survey", "segmentation", - "ar", "3d reconstruction", "nerf", - "localization", - "3d gaussian", - "robotics", - "survey", - "mapping", "understanding", - "slam", - "high-fidelity" + "robotics", + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -20897,13 +21078,13 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", - "fast", "3d gaussian", + "gaussian splatting", + "human", "efficient", - "body" + "ar", + "body", + "fast" ], "citations": 0, "semantic_url": "" @@ -20930,11 +21111,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", - "robotics", + "gaussian splatting", "understanding", + "robotics", + "ar", "semantic" ], "citations": 0, @@ -20961,11 +21142,11 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", "ar", - "face", - "3d gaussian", - "sparse view" + "face" ], "citations": 0, "semantic_url": "" @@ -20989,8 +21170,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -21015,12 +21196,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "motion", "3d gaussian", - "outdoor" + "outdoor", + "gaussian splatting", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -21044,10 +21225,10 @@ ], "github_url": "", "keywords": [ - "ar", - "deformation", "3d gaussian", + "deformation", "efficient", + "ar", "body" ], "citations": 0, @@ 
-21075,14 +21256,14 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian", "efficient", - "head", - "lightweight" + "lightweight", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -21107,15 +21288,15 @@ ], "github_url": "", "keywords": [ + "large scene", + "slam", "gaussian splatting", - "ar", + "tracking", "3d reconstruction", "nerf", - "face", - "large scene", "efficient", - "tracking", - "slam", + "ar", + "face", "compact" ], "citations": 0, @@ -21141,9 +21322,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -21168,12 +21349,12 @@ ], "github_url": "https://github.com/ML-GSAI/MicroDreamer", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -21196,15 +21377,15 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "animation", "gaussian splatting", - "ar", - "face", + "high-fidelity", "avatar", - "3d gaussian", "efficient", - "head", - "high-fidelity" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -21227,16 +21408,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "compression", - "ar", "geometry", - "nerf", "3d gaussian", - "neural rendering", + "gaussian splatting", + "high-fidelity", + "nerf", + "ar", "lightweight", + "neural rendering", "compact", - "high-fidelity" + "compression" ], "citations": 0, "semantic_url": "" @@ -21261,16 +21442,16 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "high-fidelity", + "deformation", + "nerf", + "efficient", "ar", "real-time rendering", - "nerf", - "fast", "face", - "3d gaussian", - "deformation", - "efficient", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -21296,15 +21477,15 @@ "github_url": 
"", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "avatar", "3d gaussian", - "neural rendering", - "deformation", "head", + "gaussian splatting", + "high-fidelity", "vr", - "high-fidelity" + "deformation", + "avatar", + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -21325,11 +21506,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -21354,10 +21535,10 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian", + "ar", "neural rendering" ], "citations": 0, @@ -21380,10 +21561,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "face", "ar", - "3d gaussian" + "face", + "reflection" ], "citations": 0, "semantic_url": "" @@ -21408,12 +21590,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "motion", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "3d reconstruction", + "ar" ], "citations": 0, "semantic_url": "" @@ -21438,9 +21620,9 @@ ], "github_url": "", "keywords": [ - "face", + "3d gaussian", "ar", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -21464,12 +21646,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "3d gaussian", + "slam", + "gaussian splatting", "mapping", - "slam" + "ar", + "illumination", + "localization" ], "citations": 0, "semantic_url": "" @@ -21494,11 +21677,11 @@ ], "github_url": "", "keywords": [ + "geometry", "gaussian splatting", "ar", - "geometry", - "face", - "semantic" + "semantic", + "face" ], "citations": 0, "semantic_url": "" @@ -21521,10 +21704,10 @@ ], "github_url": "https://github.com/jwubz123/DIG3D", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "gaussian splatting", + "3d 
reconstruction" ], "citations": 0, "semantic_url": "" @@ -21550,12 +21733,12 @@ ], "github_url": "https://github.com/KU-CVLAB/GaussianTalker", "keywords": [ - "gaussian splatting", - "ar", - "fast", "head", "3d gaussian", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -21578,12 +21761,12 @@ ], "github_url": "https://github.com/CrystalWlz/OMEGAS", "keywords": [ + "large scene", "gaussian splatting", "segmentation", - "ar", "3d reconstruction", - "real-time rendering", - "large scene" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -21609,13 +21792,13 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "face", "motion", - "deformation", "head", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "deformation", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -21637,9 +21820,9 @@ ], "github_url": "", "keywords": [ + "ar", "gaussian splatting", - "tracking", - "ar" + "tracking" ], "citations": 0, "semantic_url": "" @@ -21661,13 +21844,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", - "human", - "ar", "few-shot", - "motion", - "3d gaussian", - "efficient" + "human", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -21692,17 +21875,17 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "segmentation", + "understanding", + "efficient rendering", + "efficient", "ar", - "geometry", "real-time rendering", - "3d gaussian", - "efficient", - "efficient rendering", - "understanding", - "compact", - "semantic" + "semantic", + "compact" ], "citations": 0, "semantic_url": "" @@ -21733,13 +21916,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", + "head", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "motion", - "3d gaussian", - "head" + "ar", + "real-time rendering" ], "citations": 0, 
"semantic_url": "" @@ -21761,10 +21944,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -21793,14 +21976,14 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "animation", "gaussian splatting", + "recognition", + "avatar", "ar", - "face", - "avatar", - "motion", - "3d gaussian", - "recognition" + "face" ], "citations": 0, "semantic_url": "" @@ -21823,9 +22006,9 @@ ], "github_url": "", "keywords": [ - "segmentation", + "3d gaussian", "ar", - "3d gaussian" + "segmentation" ], "citations": 0, "semantic_url": "" @@ -21851,11 +22034,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "head", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -21877,11 +22060,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -21902,15 +22085,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "dynamic", + "gaussian splatting", + "high-fidelity", "deformation", - "efficient", "efficient rendering", - "high-fidelity" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -21938,9 +22121,9 @@ ], "github_url": "", "keywords": [ - "ar", "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -21961,10 +22144,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "neural rendering", "3d gaussian", - "neural rendering" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -21986,10 +22169,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22012,14 
+22195,15 @@ ], "github_url": "", "keywords": [ - "human", + "path tracing", + "3d gaussian", + "medical", "gaussian splatting", + "human", "ar", - "3d gaussian", "lightweight", "body", - "compact", - "medical" + "compact" ], "citations": 0, "semantic_url": "" @@ -22040,12 +22224,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -22068,13 +22252,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", + "vr", "nerf", - "head", "efficient", - "3d gaussian", - "vr" + "ar" ], "citations": 0, "semantic_url": "" @@ -22097,11 +22281,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "real-time rendering", - "3d gaussian", - "efficient" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -22129,10 +22313,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "nerf", "3d gaussian", - "nerf" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22161,15 +22345,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", - "face", "motion", "3d gaussian", - "efficient", + "gaussian splatting", + "3d reconstruction", "efficient rendering", - "lightweight" + "efficient", + "lightweight", + "face", + "ar" ], "citations": 0, "semantic_url": "" @@ -22197,12 +22381,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "localization", "3d gaussian", + "gaussian splatting", "efficient", - "neural rendering" + "ar", + "neural rendering", + "localization" ], "citations": 0, "semantic_url": "" @@ -22227,10 +22411,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "compact", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -22254,10 
+22438,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "lighting", "gaussian splatting", + "nerf", + "relighting", "ar", - "geometry", - "nerf" + "illumination" ], "citations": 0, "semantic_url": "" @@ -22279,10 +22466,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", "high-fidelity", + "ar", "semantic" ], "citations": 0, @@ -22305,11 +22492,11 @@ ], "github_url": "", "keywords": [ - "human", + "head", "gaussian splatting", - "ar", "3d reconstruction", - "head" + "human", + "ar" ], "citations": 0, "semantic_url": "" @@ -22333,12 +22520,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "animation", - "ar", - "motion", "deformation", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -22363,12 +22550,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "nerf", "human", "ar", - "nerf", "face", - "3d gaussian", "body" ], "citations": 0, @@ -22392,13 +22579,13 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "ar", "geometry", - "real-time rendering", + "gaussian splatting", + "human", "avatar", - "efficient" + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -22423,9 +22610,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22451,12 +22638,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", "3d gaussian", + "slam", + "gaussian splatting", "mapping", - "slam" + "ar", + "illumination" ], "citations": 0, "semantic_url": "" @@ -22484,10 +22672,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "ar", "semantic" ], "citations": 0, @@ -22511,10 +22699,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + 
"nerf", "3d gaussian", - "nerf" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22536,8 +22724,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22564,11 +22752,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -22595,10 +22783,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "gaussian splatting", "ar", - "head", - "3d gaussian", "compact" ], "citations": 0, @@ -22625,14 +22813,14 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", - "nerf", "3d gaussian", + "dynamic", + "gaussian splatting", + "vr", "deformation", - "vr" + "nerf", + "ar" ], "citations": 0, "semantic_url": "" @@ -22656,15 +22844,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "slam", "gaussian splatting", - "ar", + "mapping", "3d reconstruction", - "fast", + "ar", "face", - "localization", - "3d gaussian", - "mapping", - "slam" + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -22685,9 +22873,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22708,9 +22896,9 @@ "github_url": "", "keywords": [ "acceleration", - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -22735,12 +22923,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", "3d gaussian", - "efficient" + "gaussian splatting", + "nerf", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -22762,8 +22950,8 @@ ], "github_url": "https://github.com/ZcsrenlongZ/ZoomGS", "keywords": [ - "gaussian splatting", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, 
"semantic_url": "" @@ -22785,11 +22973,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", + "outdoor", + "gaussian splatting", "nerf", - "outdoor" + "ar" ], "citations": 0, "semantic_url": "" @@ -22817,11 +23005,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -22851,12 +23039,13 @@ ], "github_url": "", "keywords": [ + "motion", + "lighting", + "tracking", "human", - "4d", - "ar", "avatar", - "motion", - "tracking" + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -22878,11 +23067,12 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "motion", - "3d gaussian" + "ar", + "shadow" ], "citations": 0, "semantic_url": "" @@ -22907,13 +23097,13 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "localization", "3d gaussian", + "outdoor", "mapping", "robotics", - "outdoor" + "ar", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -22938,12 +23128,12 @@ "github_url": "", "keywords": [ "dynamic", - "4d", - "ar", - "nerf", - "motion", "3d gaussian", - "efficient" + "motion", + "nerf", + "4d", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -22968,11 +23158,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "deformation", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -22998,12 +23188,12 @@ ], "github_url": "", "keywords": [ - "ar", - "high quality", - "fast", "3d gaussian", "outdoor", - "semantic" + "ar", + "semantic", + "fast", + "high quality" ], "citations": 0, "semantic_url": "" @@ -23025,11 +23215,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "fast", - "3d gaussian" - ], + "3d gaussian", + "ar", + "gaussian splatting" + ], "citations": 0, "semantic_url": "" }, @@ -23052,10 
+23242,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "sparse-view", + "motion", "ar", - "motion" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -23082,14 +23272,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "fast", "urban scene", - "3d gaussian", - "autonomous driving" + "ar", + "autonomous driving", + "fast" ], "citations": 0, "semantic_url": "" @@ -23113,8 +23303,8 @@ ], "github_url": "", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -23135,13 +23325,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", + "head", + "gaussian splatting", "efficient", - "head" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -23166,10 +23356,10 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", "3d gaussian", + "dynamic", + "ar", "semantic" ], "citations": 0, @@ -23196,10 +23386,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", "ar", - "3d gaussian", - "nerf" + "reflection" ], "citations": 0, "semantic_url": "" @@ -23224,13 +23415,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", + "efficient", "ar", "real-time rendering", "fast", - "3d gaussian", - "efficient" + "compression" ], "citations": 0, "semantic_url": "" @@ -23255,9 +23446,9 @@ "animation", "gaussian splatting", "human", - "ar", "avatar", "efficient", + "ar", "body" ], "citations": 0, @@ -23285,16 +23476,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", - "fast", - "localization", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", "understanding", - "slam" + "ar", + "real-time rendering", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -23321,12 +23512,12 @@ ], "github_url": 
"https://github.com/CVMI-Lab/3DGSR", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "face", - "3d gaussian", - "efficient" + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -23356,12 +23547,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "motion", "3d gaussian", + "gaussian splatting", "efficient", + "ar", + "face", "sparse-view" ], "citations": 0, @@ -23389,11 +23580,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -23421,16 +23612,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "nerf", - "urban scene", - "face", "3d gaussian", + "gaussian splatting", + "high-fidelity", "mapping", + "urban scene", + "nerf", "understanding", - "high-fidelity" + "ar", + "face", + "fast" ], "citations": 0, "semantic_url": "" @@ -23457,11 +23648,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "3d gaussian", + "gaussian splatting", "autonomous driving", + "nerf", + "ar", "neural rendering" ], "citations": 0, @@ -23485,13 +23676,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", - "ar", + "autonomous driving", "urban scene", - "motion", - "3d gaussian", - "neural rendering", - "autonomous driving" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -23515,12 +23706,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "3d gaussian", "outdoor", - "high-fidelity" + "gaussian splatting", + "high-fidelity", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -23546,8 +23737,8 @@ ], "github_url": "https://github.com/zsy1987/SA-GS", "keywords": [ - "gaussian splatting", - "ar" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -23573,13 +23764,13 @@ ], "github_url": 
"https://github.com/hustvl/TOGS", "keywords": [ + "sparse view", + "head", + "medical", "gaussian splatting", "4d", "ar", - "real-time rendering", - "head", - "sparse view", - "medical" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -23605,11 +23796,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", "nerf", - "3d gaussian", + "ar", "sparse-view" ], "citations": 0, @@ -23636,12 +23827,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -23662,11 +23853,11 @@ ], "github_url": "", "keywords": [ - "human", + "geometry", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", + "human", + "ar", "face" ], "citations": 0, @@ -23689,9 +23880,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "nerf", "ar", - "nerf" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -23716,11 +23907,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "segmentation", - "ar", "3d reconstruction", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -23746,13 +23937,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "large scene", "gaussian splatting", - "ar", - "real-time rendering", + "high-fidelity", "nerf", - "large scene", - "3d gaussian", - "high-fidelity" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -23776,14 +23967,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "high quality", "real-time rendering", - "fast", - "geometry", "face", - "3d gaussian" + "fast", + "high quality" ], "citations": 0, "semantic_url": "" @@ -23807,14 +23998,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", "3d gaussian", - 
"efficient", + "gaussian splatting", + "high-fidelity", "vr", - "high-fidelity" + "3d reconstruction", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -23837,8 +24028,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "semantic" ], "citations": 0, @@ -23863,12 +24054,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", + "gaussian splatting", "efficient", + "ar", + "face", "neural rendering" ], "citations": 0, @@ -23892,13 +24083,13 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", - "fast", "large scene", "3d gaussian", + "3d reconstruction", "efficient", - "semantic" + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -23925,16 +24116,16 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", - "face", - "localization", "3d gaussian", - "efficient", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "nerf", + "efficient", + "ar", + "face", + "localization" ], "citations": 0, "semantic_url": "" @@ -23959,11 +24150,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "illumination", + "fast" ], "citations": 0, "semantic_url": "" @@ -23986,14 +24178,15 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", "segmentation", + "understanding", "ar", + "semantic", "fast", - "localization", - "3d gaussian", - "understanding", - "semantic" + "localization" ], "citations": 0, "semantic_url": "" @@ -24017,11 +24210,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -24047,16 +24240,16 @@ ], "github_url": "", "keywords": [ + "slam", + "medical", "gaussian splatting", - "ar", - "localization", - 
"efficient", "tracking", + "high-fidelity", "mapping", + "efficient", + "ar", "body", - "slam", - "high-fidelity", - "medical" + "localization" ], "citations": 0, "semantic_url": "" @@ -24084,11 +24277,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "high-fidelity", "4d", - "ar", - "3d gaussian", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -24114,12 +24307,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -24145,10 +24338,10 @@ ], "github_url": "", "keywords": [ - "ar", - "3d reconstruction", "3d gaussian", + "3d reconstruction", "efficient", + "ar", "sparse-view" ], "citations": 0, @@ -24170,14 +24363,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "animation", "gaussian splatting", + "efficient rendering", + "efficient", "ar", "real-time rendering", - "face", - "3d gaussian", - "efficient", - "efficient rendering" + "face" ], "citations": 0, "semantic_url": "" @@ -24200,11 +24393,11 @@ ], "github_url": "https://github.com/YihangChen-ee/HAC", "keywords": [ + "3d gaussian", "gaussian splatting", - "compression", "ar", - "3d gaussian", - "compact" + "compact", + "compression" ], "citations": 0, "semantic_url": "" @@ -24250,14 +24443,14 @@ ], "github_url": "", "keywords": [ - "dynamic", + "geometry", + "3d gaussian", + "dynamic", "gaussian splatting", - "ar", "3d reconstruction", - "high quality", + "ar", "fast", - "geometry", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -24277,9 +24470,10 @@ ], "github_url": "https://github.com/fatPeter/mini-splatting", "keywords": [ - "compression", "ar", - "efficient" + "efficient", + "lighting", + "compression" ], "citations": 0, "semantic_url": "" @@ -24310,10 +24504,10 @@ "keywords": [ "gaussian splatting", "ar", - "high quality", + "lightweight", "real-time 
rendering", "fast", - "lightweight", + "high quality", "compact" ], "citations": 0, @@ -24340,10 +24534,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "motion", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -24370,13 +24564,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", "geometry", - "fast", "3d gaussian", - "efficient" + "gaussian splatting", + "3d reconstruction", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -24403,15 +24597,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", - "urban scene", - "motion", "3d gaussian", + "motion", + "dynamic", + "gaussian splatting", "tracking", + "urban scene", "understanding", + "ar", "semantic" ], "citations": 0, @@ -24433,18 +24627,18 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "localization", "3d gaussian", - "efficient", - "vr", + "slam", + "gaussian splatting", "tracking", - "robotics", + "high-fidelity", + "vr", "mapping", - "slam", - "high-fidelity" + "robotics", + "efficient", + "ar", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -24467,13 +24661,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", + "slam", + "gaussian splatting", "tracking", + "high-fidelity", "mapping", - "slam", - "high-fidelity" + "ar" ], "citations": 0, "semantic_url": "" @@ -24500,12 +24694,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "4d", - "ar", - "motion", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -24536,9 +24730,9 @@ ], "github_url": "", "keywords": [ - "ar", "fast", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -24568,9 +24762,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", 
"ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -24595,12 +24789,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "face", "3d gaussian", + "gaussian splatting", + "high-fidelity", "efficient", - "high-fidelity" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -24621,12 +24815,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", + "nerf", "ar", "real-time rendering", - "nerf", - "motion", - "3d gaussian", "neural rendering" ], "citations": 0, @@ -24651,17 +24845,17 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", + "slam", "gaussian splatting", + "tracking", + "mapping", "segmentation", - "3d gaussian", "efficient", - "head", "lightweight", - "tracking", - "mapping", - "slam", - "compact", - "semantic" + "semantic", + "compact" ], "citations": 0, "semantic_url": "" @@ -24683,10 +24877,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "ar", "semantic" ], "citations": 0, @@ -24713,15 +24907,15 @@ ], "github_url": "", "keywords": [ - "human", - "ar", - "fast", - "nerf", - "avatar", + "3d gaussian", "motion", "deformation", - "3d gaussian", - "body" + "nerf", + "avatar", + "human", + "ar", + "body", + "fast" ], "citations": 0, "semantic_url": "" @@ -24749,11 +24943,11 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", "ar", - "fast", - "head", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -24772,10 +24966,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "gaussian splatting", + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -24803,12 +24997,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", - "geometry", "deformation", - "3d gaussian" + "ar" ], "citations": 0, 
"semantic_url": "" @@ -24832,12 +25026,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", + "3d gaussian", "motion", + "gaussian splatting", "deformation", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -24860,10 +25054,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d reconstruction", - "3d gaussian" + "gaussian splatting", + "3d reconstruction" ], "citations": 0, "semantic_url": "" @@ -24887,10 +25081,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "lighting", "gaussian splatting", "understanding", - "ar", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -24913,12 +25108,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "localization", "3d gaussian", + "gaussian splatting", + "mapping", "efficient", - "mapping" + "ar", + "localization" ], "citations": 0, "semantic_url": "" @@ -24939,9 +25134,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "face", + "nerf", "ar", - "nerf" + "face", + "reflection" ], "citations": 0, "semantic_url": "" @@ -24965,11 +25161,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -24992,13 +25188,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "deformation", - "3d gaussian", "understanding", - "semantic" + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -25026,13 +25222,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "slam", "gaussian splatting", "ar", - "geometry", "real-time rendering", "fast", - "3d gaussian", - "slam", "compact" ], "citations": 0, @@ -25059,18 +25255,18 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", + "survey", "3d reconstruction", - "fast", 
"nerf", - "geometry", - "3d gaussian", - "efficient", + "understanding", "efficient rendering", - "survey", - "understanding" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -25094,9 +25290,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -25119,9 +25315,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "relightable", + "lighting", "human", + "relighting", "ar", - "3d gaussian" + "illumination" ], "citations": 0, "semantic_url": "" @@ -25142,10 +25342,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "segmentation", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -25168,11 +25368,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", "gaussian splatting", "ar", - "fast", - "3d gaussian", - "outdoor" + "fast" ], "citations": 0, "semantic_url": "" @@ -25194,8 +25394,8 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -25225,11 +25425,11 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", - "real-time rendering", "nerf", - "3d gaussian" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -25252,15 +25452,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "real-time rendering", - "face", "3d gaussian", - "efficient", + "gaussian splatting", + "high-fidelity", "mapping", - "high-fidelity" + "efficient", + "ar", + "real-time rendering", + "face" ], "citations": 0, "semantic_url": "" @@ -25282,12 +25482,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", "3d gaussian", - "efficient" + "gaussian splatting", + "efficient", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -25316,13 +25516,15 @@ ], "github_url": 
"", "keywords": [ - "ar", + "3d gaussian", + "outdoor", + "vr", "3d reconstruction", "nerf", - "3d gaussian", "efficient", - "vr", - "outdoor" + "shadow", + "ar", + "reflection" ], "citations": 0, "semantic_url": "" @@ -25347,11 +25549,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "robotics", "ar", - "face", - "3d gaussian", - "robotics" + "face" ], "citations": 0, "semantic_url": "" @@ -25385,14 +25587,14 @@ ], "github_url": "https://github.com/MrSecant/GaussianGrasper", "keywords": [ - "human", - "gaussian splatting", - "ar", "geometry", - "nerf", "3d gaussian", + "gaussian splatting", + "nerf", + "human", + "robotics", "efficient", - "robotics" + "ar" ], "citations": 0, "semantic_url": "" @@ -25414,12 +25616,12 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", "3d gaussian", + "dynamic", + "robotics", "efficient", - "robotics" + "ar" ], "citations": 0, "semantic_url": "" @@ -25443,11 +25645,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "motion", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -25472,10 +25674,10 @@ ], "github_url": "https://github.com/yjhboy/Hyper3DG", "keywords": [ - "head", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + "ar", + "head" ], "citations": 0, "semantic_url": "" @@ -25499,11 +25701,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -25528,10 +25730,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "fast", - "3d gaussian" + "3d gaussian", + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -25561,10 +25763,10 @@ ], "github_url": "https://github.com/Xinjie-Q/GaussianImage", "keywords": [ - "gaussian splatting", - "compression", + "fast", "ar", - "fast" + "gaussian splatting", + "compression" ], 
"citations": 0, "semantic_url": "" @@ -25588,12 +25790,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "outdoor", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian", - "outdoor" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -25646,11 +25848,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", - "real-time rendering", - "3d gaussian", - "efficient" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -25674,12 +25876,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "segmentation", "3d gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam", + "segmentation", "semantic" ], "citations": 0, @@ -25705,14 +25907,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", - "few-shot", "3d gaussian", + "gaussian splatting", + "few-shot", "efficient", - "sparse-view" + "ar", + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -25735,10 +25937,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "nerf", "3d gaussian", - "nerf" + "ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -25761,8 +25963,8 @@ ], "github_url": "https://github.com/heheyas/V3D", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -25788,10 +25990,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "ar", - "fast", "nerf", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -25818,16 +26020,16 @@ ], "github_url": "", "keywords": [ + "geometry", + "motion", + "head", "animation", "gaussian splatting", + "deformation", "human", + "avatar", "ar", - "geometry", "face", - "avatar", - "motion", - "deformation", - "head", "body" ], "citations": 0, @@ -25853,13 +26055,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", + "motion", "gaussian splatting", "ar", - "geometry", - 
"high quality", "fast", - "motion", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -25886,11 +26088,11 @@ ], "github_url": "https://github.com/caiyuanhao1998/X-Gaussian", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", "efficient", + "ar", "sparse-view" ], "citations": 0, @@ -25913,9 +26115,9 @@ ], "github_url": "", "keywords": [ - "ar", "3d gaussian", - "mapping" + "mapping", + "ar" ], "citations": 0, "semantic_url": "" @@ -25942,9 +26144,9 @@ "github_url": "", "keywords": [ "gaussian splatting", + "nerf", "ar", "fast", - "nerf", "localization" ], "citations": 0, @@ -25970,12 +26172,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "efficient", "ar", - "fast", "real-time rendering", - "3d gaussian", "neural rendering", - "efficient", + "fast", "compact" ], "citations": 0, @@ -25996,13 +26198,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "animation", "gaussian splatting", + "mapping", "ar", - "geometry", - "3d gaussian", - "neural rendering", - "mapping" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -26031,14 +26233,14 @@ ], "github_url": "", "keywords": [ + "large scene", + "3d gaussian", "gaussian splatting", + "high-fidelity", + "nerf", "ar", "real-time rendering", - "nerf", - "fast", - "large scene", - "3d gaussian", - "high-fidelity" + "fast" ], "citations": 0, "semantic_url": "" @@ -26065,13 +26267,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "high-fidelity", "human", + "avatar", "ar", "face", - "avatar", - "3d gaussian", - "body", - "high-fidelity" + "body" ], "citations": 0, "semantic_url": "" @@ -26098,11 +26300,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "real-time rendering", "face", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -26128,11 +26330,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", 
"ar", "face", - "motion", - "3d gaussian", "neural rendering" ], "citations": 0, @@ -26155,12 +26357,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -26187,15 +26389,16 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", - "nerf", - "localization", "3d gaussian", - "survey", + "slam", + "lighting", + "gaussian splatting", "mapping", + "survey", + "nerf", + "ar", "body", - "slam" + "localization" ], "citations": 0, "semantic_url": "" @@ -26223,15 +26426,18 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "lighting", "animation", + "high-fidelity", "human", - "ar", - "geometry", - "real-time rendering", - "3d gaussian", + "relighting", "efficient", - "high-fidelity" + "ar", + "illumination", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -26258,11 +26464,12 @@ ], "github_url": "https://github.com/GaussianObject/GaussianObject", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", - "sparse view" + "ar", + "illumination" ], "citations": 0, "semantic_url": "" @@ -26290,11 +26497,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -26322,9 +26529,9 @@ ], "github_url": "https://github.com/Zhen-Dong/Magic-Me", "keywords": [ - "face", + "3d gaussian", "ar", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -26351,10 +26558,10 @@ ], "github_url": "", "keywords": [ + "efficient", "gaussian splatting", - "ar", "3d reconstruction", - "efficient" + "ar" ], "citations": 0, "semantic_url": "" @@ -26380,10 +26587,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "3d gaussian" + "3d gaussian", + 
"ar", + "gaussian splatting" ], "citations": 0, "semantic_url": "" @@ -26408,13 +26615,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "nerf", "3d gaussian", - "survey", + "gaussian splatting", "mapping", - "robotics" + "survey", + "nerf", + "robotics", + "ar" ], "citations": 0, "semantic_url": "" @@ -26439,10 +26646,10 @@ "github_url": "", "keywords": [ "gaussian splatting", - "ar", "nerf", - "face", - "avatar" + "avatar", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -26465,10 +26672,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "face", + "3d gaussian", "ar", - "3d gaussian" + "gaussian splatting", + "face" ], "citations": 0, "semantic_url": "" @@ -26490,12 +26697,12 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", + "avatar", "ar", "face", - "avatar", - "head", - "3d gaussian", "semantic" ], "citations": 0, @@ -26522,12 +26729,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "face", + "3d gaussian", + "gaussian splatting", "deformation", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -26550,14 +26757,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "gaussian splatting", - "human", - "ar", + "vr", "deformation", - "3d gaussian", + "human", "efficient", - "head", - "vr" + "ar" ], "citations": 0, "semantic_url": "" @@ -26582,15 +26789,15 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "4d", - "ar", - "motion", "3d gaussian", + "motion", + "gaussian splatting", + "high-fidelity", "deformation", - "efficient", "acceleration", - "high-fidelity" + "4d", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -26617,14 +26824,14 @@ ], "github_url": "", "keywords": [ + "geometry", + "slam", "gaussian splatting", "segmentation", - "ar", - "geometry", - "real-time rendering", "understanding", - "slam", - "semantic" + "ar", + "semantic", + "real-time 
rendering" ], "citations": 0, "semantic_url": "" @@ -26650,9 +26857,9 @@ "animation", "gaussian splatting", "ar", + "face", "real-time rendering", - "fast", - "face" + "fast" ], "citations": 0, "semantic_url": "" @@ -26679,13 +26886,13 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "3d gaussian", + "head", "animation", - "ar", + "gaussian splatting", "nerf", "avatar", - "3d gaussian", - "head" + "ar" ], "citations": 0, "semantic_url": "" @@ -26710,11 +26917,11 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "ar", - "geometry", - "face", - "3d gaussian" + "face" ], "citations": 0, "semantic_url": "" @@ -26738,11 +26945,11 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian", - "neural rendering", - "sparse view" + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -26767,14 +26974,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", + "motion", "gaussian splatting", "ar", "real-time rendering", - "fast", - "motion", - "3d gaussian", - "head", - "original gaussian splatting" + "fast" ], "citations": 0, "semantic_url": "" @@ -26800,13 +27006,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", "segmentation", "ar", - "geometry", "real-time rendering", - "fast", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -26838,12 +27044,13 @@ "keywords": [ "dynamic", "gaussian splatting", - "human", - "segmentation", - "ar", + "vr", "deformation", + "segmentation", + "human", "efficient", - "vr", + "shadow", + "ar", "body" ], "citations": 0, @@ -26895,17 +27102,17 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "4d", - "ar", "geometry", - "nerf", - "face", "3d gaussian", + "dynamic", + "gaussian splatting", "deformation", + "nerf", + "4d", "efficient", - "lightweight" + "lightweight", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ 
-26938,13 +27145,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", "animation", + "deformation", "ar", "face", - "motion", - "deformation", - "3d gaussian" + "reflection" ], "citations": 0, "semantic_url": "" @@ -26968,10 +27176,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "3d gaussian", "ar", - "localization", - "3d gaussian" + "gaussian splatting", + "localization" ], "citations": 0, "semantic_url": "" @@ -26993,8 +27201,9 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", + "lighting", "ar", + "gaussian splatting", "3d reconstruction" ], "citations": 0, @@ -27016,14 +27225,14 @@ "github_url": "", "keywords": [ "dynamic", + "medical", "gaussian splatting", - "ar", + "vr", "3d reconstruction", "nerf", "efficient", - "vr", - "body", - "medical" + "ar", + "body" ], "citations": 0, "semantic_url": "" @@ -27047,16 +27256,16 @@ ], "github_url": "", "keywords": [ - "animation", - "gaussian splatting", - "ar", "geometry", - "face", - "avatar", - "deformation", "3d gaussian", "head", - "high-fidelity" + "gaussian splatting", + "animation", + "high-fidelity", + "deformation", + "avatar", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -27079,15 +27288,15 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "gaussian splatting", - "ar", - "face", + "tracking", "deformation", - "3d gaussian", "efficient", - "head", "lightweight", - "tracking" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -27112,14 +27321,14 @@ ], "github_url": "https://github.com/HKU-MedAI/EndoGS", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", - "ar", + "deformation", "3d reconstruction", - "geometry", - "face", - "3d gaussian", - "deformation" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -27141,13 +27350,13 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", "gaussian splatting", + "deformation", "human", "ar", - 
"geometry", - "deformation", - "3d gaussian", "body" ], "citations": 0, @@ -27170,14 +27379,14 @@ ], "github_url": "", "keywords": [ + "sparse view", "dynamic", "gaussian splatting", "4d", + "efficient", "ar", "real-time rendering", - "fast", - "efficient", - "sparse view" + "fast" ], "citations": 0, "semantic_url": "" @@ -27211,10 +27420,10 @@ ], "github_url": "https://github.com/zhanghm1995/Forge_VFM4AD", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", + "ar", "autonomous driving" ], "citations": 0, @@ -27236,14 +27445,17 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", + "dynamic", "motion", - "neural rendering", + "outdoor", + "lighting", + "gaussian splatting", "head", - "outdoor" + "relighting", + "ar", + "shadow", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -27267,10 +27479,10 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "gaussian splatting", + "ar", "lightweight" ], "citations": 0, @@ -27295,13 +27507,13 @@ ], "github_url": "", "keywords": [ - "segmentation", - "ar", "3d gaussian", + "segmentation", + "understanding", "efficient", - "compact", + "ar", "semantic", - "understanding" + "compact" ], "citations": 0, "semantic_url": "" @@ -27326,11 +27538,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -27353,13 +27565,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "3d reconstruction", - "real-time rendering", "3d gaussian", + "gaussian splatting", "survey", - "understanding" + "3d reconstruction", + "understanding", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -27380,12 +27592,12 @@ ], "github_url": "", "keywords": [ - "human", + "motion", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "motion", - "3d 
gaussian" + "human", + "ar" ], "citations": 0, "semantic_url": "" @@ -27408,14 +27620,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "fast", - "nerf", - "motion", "3d gaussian", - "mapping" + "motion", + "lighting", + "gaussian splatting", + "mapping", + "nerf", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -27440,9 +27653,9 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "ar", - "3d gaussian", "semantic" ], "citations": 0, @@ -27467,13 +27680,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", "3d gaussian", - "efficient", + "gaussian splatting", "understanding", - "semantic" + "efficient", + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -27502,14 +27715,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "urban scene", + "nerf", "4d", "ar", - "nerf", - "urban scene", - "3d gaussian", - "autonomous driving", - "semantic" + "semantic", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -27532,11 +27745,11 @@ ], "github_url": "", "keywords": [ + "motion", + "3d gaussian", "gaussian splatting", "ar", - "real-time rendering", - "motion", - "3d gaussian" + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -27560,11 +27773,11 @@ "github_url": "", "keywords": [ "dynamic", - "4d", - "ar", - "motion", "3d gaussian", - "efficient" + "motion", + "4d", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -27591,12 +27804,12 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "4d", - "ar", "motion", + "gaussian splatting", "deformation", - "efficient" + "4d", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -27620,10 +27833,10 @@ "github_url": "https://github.com/oppo-us-research/SpacetimeGaussians", "keywords": [ "dynamic", + "3d gaussian", + "motion", "ar", "real-time rendering", - "motion", - "3d gaussian", "compact" ], 
"citations": 0, @@ -27647,12 +27860,12 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", + "human", "efficient", + "ar", "semantic" ], "citations": 0, @@ -27678,11 +27891,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "segmentation", - "ar", "nerf", - "3d gaussian", "understanding", + "ar", "semantic" ], "citations": 0, @@ -27707,11 +27920,11 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", "3d gaussian", "efficient", - "sparse-view" + "ar", + "sparse-view", + "fast" ], "citations": 0, "semantic_url": "" @@ -27734,16 +27947,16 @@ "github_url": "https://github.com/longxiang-ai/Human101", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "animation", + "high-fidelity", + "nerf", "human", + "efficient", "ar", "real-time rendering", - "nerf", - "3d gaussian", - "efficient", - "body", - "high-fidelity" + "body" ], "citations": 0, "semantic_url": "" @@ -27771,14 +27984,14 @@ "github_url": "", "keywords": [ "dynamic", - "gaussian splatting", + "3d gaussian", "animation", + "gaussian splatting", + "deformation", "human", - "ar", "avatar", - "deformation", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -27802,15 +28015,15 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "dynamic", + "motion", "gaussian splatting", - "4d", "animation", - "ar", - "geometry", - "motion", - "3d gaussian", - "deformation" + "deformation", + "4d", + "ar" ], "citations": 0, "semantic_url": "" @@ -27834,9 +28047,11 @@ "github_url": "", "keywords": [ "gaussian splatting", + "nerf", "ar", + "shadow", "fast", - "nerf" + "reflection" ], "citations": 0, "semantic_url": "" @@ -27857,12 +28072,12 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", - "efficient" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -27888,12 +28103,12 @@ 
"github_url": "", "keywords": [ "dynamic", - "gaussian splatting", - "ar", + "3d gaussian", "motion", + "head", + "gaussian splatting", "deformation", - "3d gaussian", - "head" + "ar" ], "citations": 0, "semantic_url": "" @@ -27915,11 +28130,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", + "efficient", "ar", "fast", - "3d gaussian", - "efficient", "compact" ], "citations": 0, @@ -27943,13 +28158,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "fast", - "3d gaussian", + "efficient rendering", "efficient", - "efficient rendering" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -27976,15 +28191,15 @@ ], "github_url": "", "keywords": [ - "animation", - "gaussian splatting", - "ar", "geometry", - "fast", + "3d gaussian", + "gaussian splatting", + "animation", "nerf", "avatar", - "3d gaussian", - "body" + "ar", + "body", + "fast" ], "citations": 0, "semantic_url": "" @@ -28009,14 +28224,14 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", "geometry", - "nerf", - "real-time rendering", - "motion", "3d gaussian", - "deformation" + "dynamic", + "motion", + "deformation", + "nerf", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -28036,9 +28251,9 @@ ], "github_url": "", "keywords": [ + "ar", "gaussian splatting", - "face", - "ar" + "face" ], "citations": 0, "semantic_url": "" @@ -28061,8 +28276,8 @@ ], "github_url": "", "keywords": [ - "ar", - "3d gaussian" + "3d gaussian", + "ar" ], "citations": 0, "semantic_url": "" @@ -28085,15 +28300,15 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", - "ar", - "fast", + "deformation", "nerf", "avatar", - "deformation", - "3d gaussian", - "efficient" + "human", + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -28118,13 +28333,13 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - 
"fast", + "efficient", + "ar", "face", - "3d gaussian", - "efficient" + "fast" ], "citations": 0, "semantic_url": "" @@ -28149,11 +28364,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -28178,12 +28393,12 @@ "github_url": "https://github.com/VDIGPKU/DrivingGaussian", "keywords": [ "dynamic", - "gaussian splatting", - "ar", "3d gaussian", + "gaussian splatting", + "high-fidelity", "efficient", - "autonomous driving", - "high-fidelity" + "ar", + "autonomous driving" ], "citations": 0, "semantic_url": "" @@ -28207,11 +28422,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", + "motion", "gaussian splatting", - "ar", "nerf", - "motion", - "3d gaussian", + "ar", "neural rendering" ], "citations": 0, @@ -28235,15 +28450,15 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "fast", - "motion", "3d gaussian", - "efficient", + "motion", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "efficient", + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -28267,13 +28482,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", "human", - "ar", - "real-time rendering", "avatar", - "3d gaussian", - "efficient" + "efficient", + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -28297,10 +28512,10 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -28328,12 +28543,17 @@ ], "github_url": "https://github.com/guduxiaolang/GIR", "keywords": [ - "ar", "geometry", - "real-time rendering", "3d gaussian", + "relightable", + "lighting", + "relighting", "efficient", - "lightweight" + "light transport", + "illumination", + "real-time rendering", + "lightweight", + "ar" ], "citations": 0, "semantic_url": "" @@ -28360,11 +28580,11 @@ ], 
"github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", - "3d gaussian", - "understanding" + "understanding", + "ar" ], "citations": 0, "semantic_url": "" @@ -28386,14 +28606,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", - "real-time rendering", "nerf", - "3d gaussian", "efficient", - "lightweight" + "lightweight", + "ar", + "real-time rendering", + "fast" ], "citations": 0, "semantic_url": "" @@ -28418,13 +28638,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "gaussian splatting", - "ar", - "avatar", "deformation", - "3d gaussian", + "avatar", "efficient", - "head" + "ar" ], "citations": 0, "semantic_url": "" @@ -28448,17 +28668,23 @@ ], "github_url": "", "keywords": [ - "dynamic", - "human", - "ar", "geometry", - "face", - "avatar", "3d gaussian", - "efficient", + "relightable", + "dynamic", + "lighting", "head", + "high-fidelity", "vr", - "high-fidelity" + "reflection", + "human", + "avatar", + "relighting", + "efficient", + "ar", + "illumination", + "face", + "global illumination" ], "citations": 0, "semantic_url": "" @@ -28484,19 +28710,19 @@ ], "github_url": "", "keywords": [ - "human", - "gaussian splatting", - "4d", - "compression", - "ar", - "motion", + "head", "3d gaussian", + "motion", + "gaussian splatting", + "tracking", + "high-fidelity", "deformation", + "human", + "4d", "efficient", - "head", - "tracking", + "ar", "compact", - "high-fidelity" + "compression" ], "citations": 0, "semantic_url": "" @@ -28519,15 +28745,15 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", "gaussian splatting", + "deformation", + "3d reconstruction", + "nerf", "4d", "ar", - "3d reconstruction", "real-time rendering", - "nerf", - "fast", - "3d gaussian", - "deformation" + "fast" ], "citations": 0, "semantic_url": "" @@ -28550,14 +28776,14 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", - "real-time rendering", - "localization", "3d 
gaussian", + "slam", + "gaussian splatting", "tracking", "mapping", - "slam" + "ar", + "real-time rendering", + "localization" ], "citations": 0, "semantic_url": "" @@ -28585,14 +28811,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "fast", "nerf", - "3d gaussian", "efficient", - "semantic" + "ar", + "semantic", + "fast" ], "citations": 0, "semantic_url": "" @@ -28623,13 +28849,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "human", - "ar", - "face", "avatar", - "3d gaussian", - "head", - "lightweight" + "ar", + "lightweight", + "face" ], "citations": 0, "semantic_url": "" @@ -28649,13 +28875,13 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "nerf", + "human", "ar", - "fast", "real-time rendering", - "nerf", - "3d gaussian" + "fast" ], "citations": 0, "semantic_url": "" @@ -28680,13 +28906,13 @@ ], "github_url": "", "keywords": [ + "head", + "3d gaussian", "animation", "gaussian splatting", - "ar", - "real-time rendering", "avatar", - "3d gaussian", - "head" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -28714,18 +28940,18 @@ ], "github_url": "", "keywords": [ - "dynamic", - "ar", + "sparse view", "geometry", - "avatar", - "motion", "3d gaussian", - "deformation", + "motion", + "dynamic", "head", - "sparse view", + "high-fidelity", + "deformation", + "avatar", + "ar", "lightweight", - "sparse-view", - "high-fidelity" + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -28750,12 +28976,11 @@ ], "github_url": "", "keywords": [ - "human", + "3d gaussian", "gaussian splatting", + "human", "ar", - "3d gaussian", - "sparse-view", - "original gaussian splatting" + "sparse-view" ], "citations": 0, "semantic_url": "" @@ -28780,13 +29005,13 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "3d gaussian", - "efficient", - "robotics", + "gaussian splatting", + "high-fidelity", "understanding", - 
"high-fidelity" + "robotics", + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -28810,12 +29035,12 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", + "sparse view", "geometry", + "gaussian splatting", "nerf", - "neural rendering", - "sparse view" + "ar", + "neural rendering" ], "citations": 0, "semantic_url": "" @@ -28841,13 +29066,13 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", + "mapping", "human", - "ar", "avatar", - "motion", - "3d gaussian", "efficient", - "mapping" + "ar" ], "citations": 0, "semantic_url": "" @@ -28874,15 +29099,15 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "localization", "3d gaussian", + "slam", "tracking", - "robotics", + "high-fidelity", "mapping", - "slam", - "high-fidelity" + "robotics", + "ar", + "fast", + "localization" ], "citations": 0, "semantic_url": "" @@ -28906,10 +29131,10 @@ ], "github_url": "https://github.com/nerfstudio-project/gsplat", "keywords": [ - "gaussian splatting", - "ar", "nerf", - "efficient" + "efficient", + "gaussian splatting", + "ar" ], "citations": 0, "semantic_url": "" @@ -28934,12 +29159,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "head", "animation", - "ar", - "face", "avatar", - "3d gaussian", - "head" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -28964,15 +29189,15 @@ ], "github_url": "", "keywords": [ - "dynamic", - "gaussian splatting", - "ar", "geometry", - "motion", "3d gaussian", + "dynamic", + "motion", + "gaussian splatting", + "high-fidelity", "deformation", - "compact", - "high-fidelity" + "ar", + "compact" ], "citations": 0, "semantic_url": "" @@ -28996,19 +29221,19 @@ ], "github_url": "https://github.com/chiehwangs/gaussian-head", "keywords": [ - "dynamic", - "animation", - "human", - "ar", "geometry", - "avatar", - "motion", "3d gaussian", - "deformation", + "dynamic", + "motion", + "animation", "head", + "high-fidelity", + "deformation", "acceleration", - "compact", - 
"high-fidelity" + "avatar", + "human", + "ar", + "compact" ], "citations": 0, "semantic_url": "" @@ -29031,15 +29256,15 @@ ], "github_url": "", "keywords": [ - "ar", - "fast", - "face", - "avatar", + "head", "3d gaussian", + "high-fidelity", + "avatar", "efficient", - "head", "lightweight", - "high-fidelity" + "ar", + "face", + "fast" ], "citations": 0, "semantic_url": "" @@ -29063,9 +29288,9 @@ "github_url": "", "keywords": [ "dynamic", - "deformation", + "3d gaussian", "ar", - "3d gaussian" + "deformation" ], "citations": 0, "semantic_url": "" @@ -29094,12 +29319,12 @@ ], "github_url": "", "keywords": [ - "ar", "geometry", - "nerf", - "face", "3d gaussian", - "high-fidelity" + "high-fidelity", + "nerf", + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -29125,11 +29350,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "ar", - "high quality", "fast", - "3d gaussian" + "high quality" ], "citations": 0, "semantic_url": "" @@ -29154,11 +29379,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", "segmentation", - "ar", - "3d gaussian", - "efficient" + "efficient", + "ar" ], "citations": 0, "semantic_url": "" @@ -29181,12 +29406,12 @@ ], "github_url": "https://github.com/lkeab/gaussian-grouping", "keywords": [ - "gaussian splatting", - "ar", "geometry", - "nerf", "3d gaussian", + "gaussian splatting", + "nerf", "understanding", + "ar", "compact" ], "citations": 0, @@ -29209,11 +29434,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", "nerf", "few-shot", - "3d gaussian" + "ar" ], "citations": 0, "semantic_url": "" @@ -29234,12 +29459,12 @@ ], "github_url": "", "keywords": [ + "geometry", + "3d gaussian", "gaussian splatting", - "ar", "3d reconstruction", - "geometry", - "face", - "3d gaussian" + "ar", + "face" ], "citations": 0, "semantic_url": "" @@ -29264,12 +29489,12 @@ ], "github_url": "", "keywords": [ + "sparse view", + "3d gaussian", "gaussian splatting", - "ar", 
- "real-time rendering", "nerf", - "3d gaussian", - "sparse view" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -29292,12 +29517,12 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "motion", "gaussian splatting", + "efficient", "ar", "fast", - "motion", - "3d gaussian", - "efficient", "compact" ], "citations": 0, @@ -29330,11 +29555,12 @@ "keywords": [ "dynamic", "gaussian splatting", - "ar", - "fast", - "deformation", "tracking", - "robotics" + "deformation", + "robotics", + "ar", + "shadow", + "fast" ], "citations": 0, "semantic_url": "" @@ -29359,10 +29585,11 @@ ], "github_url": "", "keywords": [ - "gaussian splatting", - "ar", "geometry", "3d gaussian", + "lighting", + "gaussian splatting", + "ar", "neural rendering" ], "citations": 0, @@ -29387,14 +29614,14 @@ "github_url": "", "keywords": [ "dynamic", + "3d gaussian", + "large scene", "gaussian splatting", - "ar", - "real-time rendering", "urban scene", - "large scene", - "3d gaussian", + "acceleration", "efficient", - "acceleration" + "ar", + "real-time rendering" ], "citations": 0, "semantic_url": "" @@ -29417,14 +29644,14 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "segmentation", + "understanding", + "efficient", "ar", + "semantic", "real-time rendering", - "localization", - "3d gaussian", - "efficient", - "understanding", - "semantic" + "localization" ], "citations": 0, "semantic_url": "" @@ -29446,11 +29673,11 @@ ], "github_url": "", "keywords": [ + "3d gaussian", "gaussian splatting", - "ar", - "fast", "nerf", - "3d gaussian" + "ar", + "fast" ], "citations": 0, "semantic_url": "" @@ -29474,15 +29701,15 @@ ], "github_url": "https://github.com/apple/ml-hugs", "keywords": [ + "3d gaussian", "animation", "gaussian splatting", "human", - "ar", - "fast", "avatar", - "3d gaussian", + "ar", + "body", "neural rendering", - "body" + "fast" ], "citations": 0, "semantic_url": "" @@ -29504,10 +29731,2408 @@ ], "github_url": "", "keywords": [ + "semantic", + 
"ar", + "gaussian splatting", + "face" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "FisherRF: Active View Selection and Uncertainty Quantification for Radiance Fields using Fisher Information", + "authors": [ + "Wen Jiang", + "Boshu Lei", + "Kostas Daniilidis" + ], + "abstract": "This study addresses the challenging problem of active view selection and uncertainty quantification within the domain of Radiance Fields. Neural Radiance Fields (NeRF) have greatly advanced image rendering and reconstruction, but the limited availability of 2D images poses uncertainties stemming from occlusions, depth ambiguities, and imaging errors. Efficiently selecting informative views becomes crucial, and quantifying NeRF model uncertainty presents intricate challenges. Existing approaches either depend on model architecture or are based on assumptions regarding density distributions that are not generally applicable. By leveraging Fisher Information, we efficiently quantify observed information within Radiance Fields without ground truth data. This can be used for the next best view selection and pixel-wise uncertainty quantification. Our method overcomes existing limitations on model architecture and effectiveness, achieving state-of-the-art results in both view selection and uncertainty quantification, demonstrating its potential to advance the field of Radiance Fields. 
Our method with the 3D Gaussian Splatting backend could perform view selections at 70 fps.", + "arxiv_url": "http://arxiv.org/abs/2311.17874v1", + "pdf_url": "http://arxiv.org/pdf/2311.17874v1", + "published_date": "2023-11-29", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", "gaussian splatting", + "nerf", + "efficient", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Gaussian Shell Maps for Efficient 3D Human Generation", + "authors": [ + "Rameen Abdal", + "Wang Yifan", + "Zifan Shi", + "Yinghao Xu", + "Ryan Po", + "Zhengfei Kuang", + "Qifeng Chen", + "Dit-Yan Yeung", + "Gordon Wetzstein" + ], + "abstract": "Efficient generation of 3D digital humans is important in several industries, including virtual reality, social media, and cinematic production. 3D generative adversarial networks (GANs) have demonstrated state-of-the-art (SOTA) quality and diversity for generated assets. Current 3D GAN architectures, however, typically rely on volume representations, which are slow to render, thereby hampering the GAN training and requiring multi-view-inconsistent 2D upsamplers. Here, we introduce Gaussian Shell Maps (GSMs) as a framework that connects SOTA generator network architectures with emerging 3D Gaussian rendering primitives using an articulable multi shell--based scaffold. In this setting, a CNN generates a 3D texture stack with features that are mapped to the shells. The latter represent inflated and deflated versions of a template surface of a digital human in a canonical body pose. Instead of rasterizing the shells directly, we sample 3D Gaussians on the shells whose attributes are encoded in the texture features. These Gaussians are efficiently and differentiably rendered. The ability to articulate the shells is important during GAN training and, at inference time, to deform a body into arbitrary user-defined poses. 
Our efficient rendering scheme bypasses the need for view-inconsistent upsamplers and achieves high-quality multi-view consistent renderings at a native resolution of $512 \\times 512$ pixels. We demonstrate that GSMs successfully generate 3D humans when trained on single-view datasets, including SHHQ and DeepFashion.", + "arxiv_url": "http://arxiv.org/abs/2311.17857v1", + "pdf_url": "http://arxiv.org/pdf/2311.17857v1", + "published_date": "2023-11-29", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "human", + "efficient", + "efficient rendering", + "ar", "face", + "body" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "GaussianShader: 3D Gaussian Splatting with Shading Functions for Reflective Surfaces", + "authors": [ + "Yingwenqi Jiang", + "Jiadong Tu", + "Yuan Liu", + "Xifeng Gao", + "Xiaoxiao Long", + "Wenping Wang", + "Yuexin Ma" + ], + "abstract": "The advent of neural 3D Gaussians has recently brought about a revolution in the field of neural rendering, facilitating the generation of high-quality renderings at real-time speeds. However, the explicit and discrete representation encounters challenges when applied to scenes featuring reflective surfaces. In this paper, we present GaussianShader, a novel method that applies a simplified shading function on 3D Gaussians to enhance the neural rendering in scenes with reflective surfaces while preserving the training and rendering efficiency. The main challenge in applying the shading function lies in the accurate normal estimation on discrete 3D Gaussians. Specifically, we proposed a novel normal estimation framework based on the shortest axis directions of 3D Gaussians with a delicately designed loss to make the consistency between the normals and the geometries of Gaussian spheres. Experiments show that GaussianShader strikes a commendable balance between efficiency and visual quality. 
Our method surpasses Gaussian Splatting in PSNR on specular object datasets, exhibiting an improvement of 1.57dB. When compared to prior works handling reflective surfaces, such as Ref-NeRF, our optimization time is significantly accelerated (23h vs. 0.58h). Please click on our project website to see more results.", + "arxiv_url": "http://arxiv.org/abs/2311.17977v1", + "pdf_url": "http://arxiv.org/pdf/2311.17977v1", + "published_date": "2023-11-29", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "nerf", "ar", - "semantic" + "face", + "neural rendering" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "LightGaussian: Unbounded 3D Gaussian Compression with 15x Reduction and 200+ FPS", + "authors": [ + "Zhiwen Fan", + "Kevin Wang", + "Kairun Wen", + "Zehao Zhu", + "Dejia Xu", + "Zhangyang Wang" + ], + "abstract": "Recent advances in real-time neural rendering using point-based techniques have enabled broader adoption of 3D representations. However, foundational approaches like 3D Gaussian Splatting impose substantial storage overhead, as Structure-from-Motion (SfM) points can grow to millions, often requiring gigabyte-level disk space for a single unbounded scene. This growth presents scalability challenges and hinders splatting efficiency. To address this, we introduce LightGaussian, a method for transforming 3D Gaussians into a more compact format. Inspired by Network Pruning, LightGaussian identifies Gaussians with minimal global significance on scene reconstruction, and applies a pruning and recovery process to reduce redundancy while preserving visual quality. Knowledge distillation and pseudo-view augmentation then transfer spherical harmonic coefficients to a lower degree, yielding compact representations. Gaussian Vector Quantization, based on each Gaussian's global significance, further lowers bitwidth with minimal accuracy loss. 
LightGaussian achieves an average 15x compression rate while boosting FPS from 144 to 237 within the 3D-GS framework, enabling efficient complex scene representation on the Mip-NeRF 360 and Tank & Temple datasets. The proposed Gaussian pruning approach is also adaptable to other 3D representations (e.g., Scaffold-GS), demonstrating strong generalization capabilities.", + "arxiv_url": "http://arxiv.org/abs/2311.17245v6", + "pdf_url": "http://arxiv.org/pdf/2311.17245v6", + "published_date": "2023-11-28", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "head", + "3d gaussian", + "motion", + "gaussian splatting", + "nerf", + "efficient", + "ar", + "neural rendering", + "compact", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "HumanGaussian: Text-Driven 3D Human Generation with Gaussian Splatting", + "authors": [ + "Xian Liu", + "Xiaohang Zhan", + "Jiaxiang Tang", + "Ying Shan", + "Gang Zeng", + "Dahua Lin", + "Xihui Liu", + "Ziwei Liu" + ], + "abstract": "Realistic 3D human generation from text prompts is a desirable yet challenging task. Existing methods optimize 3D representations like mesh or neural fields via score distillation sampling (SDS), which suffers from inadequate fine details or excessive training time. In this paper, we propose an efficient yet effective framework, HumanGaussian, that generates high-quality 3D humans with fine-grained geometry and realistic appearance. Our key insight is that 3D Gaussian Splatting is an efficient renderer with periodic Gaussian shrinkage or growing, where such adaptive density control can be naturally guided by intrinsic human structures. Specifically, 1) we first propose a Structure-Aware SDS that simultaneously optimizes human appearance and geometry. The multi-modal score function from both RGB and depth space is leveraged to distill the Gaussian densification and pruning process. 
2) Moreover, we devise an Annealed Negative Prompt Guidance by decomposing SDS into a noisier generative score and a cleaner classifier score, which well addresses the over-saturation issue. The floating artifacts are further eliminated based on Gaussian size in a prune-only phase to enhance generation smoothness. Extensive experiments demonstrate the superior efficiency and competitive quality of our framework, rendering vivid 3D humans under diverse scenarios. Project Page: https://alvinliu0.github.io/projects/HumanGaussian", + "arxiv_url": "http://arxiv.org/abs/2311.17061v2", + "pdf_url": "http://arxiv.org/pdf/2311.17061v2", + "published_date": "2023-11-28", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "geometry", + "3d gaussian", + "gaussian splatting", + "human", + "efficient", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Point'n Move: Interactive Scene Object Manipulation on Gaussian Splatting Radiance Fields", + "authors": [ + "Jiajun Huang", + "Hongchuan Yu" + ], + "abstract": "We propose Point'n Move, a method that achieves interactive scene object manipulation with exposed region inpainting. Interactivity here further comes from intuitive object selection and real-time editing. To achieve this, we adopt Gaussian Splatting Radiance Field as the scene representation and fully leverage its explicit nature and speed advantage. Its explicit representation formulation allows us to devise a 2D prompt points to 3D mask dual-stage self-prompting segmentation algorithm, perform mask refinement and merging, minimize change as well as provide good initialization for scene inpainting and perform editing in real-time without per-editing training, all leading to superior quality and performance. We test our method by performing editing on both forward-facing and 360 scenes. 
We also compare our method against existing scene object removal methods, showing superior quality despite being more capable and having a speed advantage.", + "arxiv_url": "http://arxiv.org/abs/2311.16737v1", + "pdf_url": "http://arxiv.org/pdf/2311.16737v1", + "published_date": "2023-11-28", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "segmentation", + "ar", + "gaussian splatting" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Human Gaussian Splatting: Real-time Rendering of Animatable Avatars", + "authors": [ + "Arthur Moreau", + "Jifei Song", + "Helisa Dhamo", + "Richard Shaw", + "Yiren Zhou", + "Eduardo Pérez-Pellitero" + ], + "abstract": "This work addresses the problem of real-time rendering of photorealistic human body avatars learned from multi-view videos. While the classical approaches to model and render virtual humans generally use a textured mesh, recent research has developed neural body representations that achieve impressive visual quality. However, these models are difficult to render in real-time and their quality degrades when the character is animated with body poses different than the training observations. We propose an animatable human model based on 3D Gaussian Splatting, that has recently emerged as a very efficient alternative to neural radiance fields. The body is represented by a set of gaussian primitives in a canonical space which is deformed with a coarse to fine approach that combines forward skinning and local non-rigid refinement. We describe how to learn our Human Gaussian Splatting (HuGS) model in an end-to-end fashion from multi-view observations, and evaluate it against the state-of-the-art approaches for novel pose synthesis of clothed body. 
Our method achieves 1.5 dB PSNR improvement over the state-of-the-art on THuman4 dataset while being able to render in real-time (80 fps for 512x512 resolution).", + "arxiv_url": "http://arxiv.org/abs/2311.17113v2", + "pdf_url": "http://arxiv.org/pdf/2311.17113v2", + "published_date": "2023-11-28", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "human", + "avatar", + "efficient", + "ar", + "real-time rendering", + "body" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Multi-Scale 3D Gaussian Splatting for Anti-Aliased Rendering", + "authors": [ + "Zhiwen Yan", + "Weng Fei Low", + "Yu Chen", + "Gim Hee Lee" + ], + "abstract": "3D Gaussians have recently emerged as a highly efficient representation for 3D reconstruction and rendering. Despite its high rendering quality and speed at high resolutions, they both deteriorate drastically when rendered at lower resolutions or from far away camera position. During low resolution or far away rendering, the pixel size of the image can fall below the Nyquist frequency compared to the screen size of each splatted 3D Gaussian and leads to aliasing effect. The rendering is also drastically slowed down by the sequential alpha blending of more splatted Gaussians per pixel. To address these issues, we propose a multi-scale 3D Gaussian splatting algorithm, which maintains Gaussians at different scales to represent the same scene. Higher-resolution images are rendered with more small Gaussians, and lower-resolution images are rendered with fewer larger Gaussians. With similar training time, our algorithm can achieve 13\\%-66\\% PSNR and 160\\%-2400\\% rendering speed improvement at 4$\\times$-128$\\times$ scale rendering on Mip-NeRF360 dataset compared to the single scale 3D Gaussian splatting. 
Our code and more results are available on our project website https://jokeryan.github.io/projects/ms-gs/", + "arxiv_url": "http://arxiv.org/abs/2311.17089v2", + "pdf_url": "http://arxiv.org/pdf/2311.17089v2", + "published_date": "2023-11-28", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "3d reconstruction", + "nerf", + "efficient", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "GART: Gaussian Articulated Template Models", + "authors": [ + "Jiahui Lei", + "Yufu Wang", + "Georgios Pavlakos", + "Lingjie Liu", + "Kostas Daniilidis" + ], + "abstract": "We introduce Gaussian Articulated Template Model GART, an explicit, efficient, and expressive representation for non-rigid articulated subject capturing and rendering from monocular videos. GART utilizes a mixture of moving 3D Gaussians to explicitly approximate a deformable subject's geometry and appearance. It takes advantage of a categorical template model prior (SMPL, SMAL, etc.) with learnable forward skinning while further generalizing to more complex non-rigid deformations with novel latent bones. GART can be reconstructed via differentiable rendering from monocular videos in seconds or minutes and rendered in novel poses faster than 150fps.", + "arxiv_url": "http://arxiv.org/abs/2311.16099v1", + "pdf_url": "http://arxiv.org/pdf/2311.16099v1", + "published_date": "2023-11-27", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "geometry", + "3d gaussian", + "deformation", + "efficient", + "ar", + "fast" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Animatable and Relightable Gaussians for High-fidelity Human Avatar Modeling", + "authors": [ + "Zhe Li", + "Yipengjing Sun", + "Zerong Zheng", + "Lizhen Wang", + "Shengping Zhang", + "Yebin Liu" + ], + "abstract": "Modeling animatable human avatars from RGB videos is a long-standing and challenging problem. 
Recent works usually adopt MLP-based neural radiance fields (NeRF) to represent 3D humans, but it remains difficult for pure MLPs to regress pose-dependent garment details. To this end, we introduce Animatable Gaussians, a new avatar representation that leverages powerful 2D CNNs and 3D Gaussian splatting to create high-fidelity avatars. To associate 3D Gaussians with the animatable avatar, we learn a parametric template from the input videos, and then parameterize the template on two front & back canonical Gaussian maps where each pixel represents a 3D Gaussian. The learned template is adaptive to the wearing garments for modeling looser clothes like dresses. Such template-guided 2D parameterization enables us to employ a powerful StyleGAN-based CNN to learn the pose-dependent Gaussian maps for modeling detailed dynamic appearances. Furthermore, we introduce a pose projection strategy for better generalization given novel poses. To tackle the realistic relighting of animatable avatars, we introduce physically-based rendering into the avatar representation for decomposing avatar materials and environment illumination. Overall, our method can create lifelike avatars with dynamic, realistic, generalized and relightable appearances. 
Experiments show that our method outperforms other state-of-the-art approaches.", + "arxiv_url": "http://arxiv.org/abs/2311.16096v4", + "pdf_url": "http://arxiv.org/pdf/2311.16096v4", + "published_date": "2023-11-27", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "relightable", + "gaussian splatting", + "lighting", + "high-fidelity", + "nerf", + "avatar", + "relighting", + "human", + "ar", + "illumination" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Relightable 3D Gaussians: Realistic Point Cloud Relighting with BRDF Decomposition and Ray Tracing", + "authors": [ + "Jian Gao", + "Chun Gu", + "Youtian Lin", + "Zhihao Li", + "Hao Zhu", + "Xun Cao", + "Li Zhang", + "Yao Yao" + ], + "abstract": "In this paper, we present a novel differentiable point-based rendering framework to achieve photo-realistic relighting. To make the reconstructed scene relightable, we enhance vanilla 3D Gaussians by associating extra properties, including normal vectors, BRDF parameters, and incident lighting from various directions. From a collection of multi-view images, the 3D scene is optimized through 3D Gaussian Splatting while BRDF and lighting are decomposed by physically based differentiable rendering. To produce plausible shadow effects in photo-realistic relighting, we introduce an innovative point-based ray tracing with the bounding volume hierarchies for efficient visibility pre-computation. Extensive experiments demonstrate our improved BRDF estimation, novel view synthesis and relighting results compared to state-of-the-art approaches. 
The proposed framework showcases the potential to revolutionize the mesh-based graphics pipeline with a point-based pipeline enabling editing, tracing, and relighting.",
+ "arxiv_url": "http://arxiv.org/abs/2311.16043v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.16043v2",
+ "published_date": "2023-11-27",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "relightable",
+ "gaussian splatting",
+ "lighting",
+ "ray tracing",
+ "relighting",
+ "efficient",
+ "shadow",
+ "ar"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GaussianEditor: Editing 3D Gaussians Delicately with Text Instructions",
+ "authors": [
+ "Junjie Wang",
+ "Jiemin Fang",
+ "Xiaopeng Zhang",
+ "Lingxi Xie",
+ "Qi Tian"
+ ],
+ "abstract": "Recently, impressive results have been achieved in 3D scene editing with text instructions based on a 2D diffusion model. However, current diffusion models primarily generate images by predicting noise in the latent space, and the editing is usually applied to the whole image, which makes it challenging to perform delicate, especially localized, editing for 3D scenes. Inspired by recent 3D Gaussian splatting, we propose a systematic framework, named GaussianEditor, to edit 3D scenes delicately via 3D Gaussians with text instructions. Benefiting from the explicit property of 3D Gaussians, we design a series of techniques to achieve delicate editing. Specifically, we first extract the region of interest (RoI) corresponding to the text instruction, aligning it to 3D Gaussians. The Gaussian RoI is further used to control the editing process. Our framework can achieve more delicate and precise editing of 3D scenes than previous methods while enjoying much faster training speed, i.e. within 20 minutes on a single V100 GPU, more than twice as fast as Instruct-NeRF2NeRF (45 minutes -- 2 hours).",
+ "arxiv_url": "http://arxiv.org/abs/2311.16037v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.16037v2",
+ "published_date": "2023-11-27",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "gaussian splatting",
+ "nerf",
+ "ar",
+ "fast"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Mip-Splatting: Alias-free 3D Gaussian Splatting",
+ "authors": [
+ "Zehao Yu",
+ "Anpei Chen",
+ "Binbin Huang",
+ "Torsten Sattler",
+ "Andreas Geiger"
+ ],
+ "abstract": "Recently, 3D Gaussian Splatting has demonstrated impressive novel view synthesis results, reaching high fidelity and efficiency. However, strong artifacts can be observed when changing the sampling rate, \\eg, by changing focal length or camera distance. We find that the source for this phenomenon can be attributed to the lack of 3D frequency constraints and the usage of a 2D dilation filter. To address this problem, we introduce a 3D smoothing filter which constrains the size of the 3D Gaussian primitives based on the maximal sampling frequency induced by the input views, eliminating high-frequency artifacts when zooming in. Moreover, replacing 2D dilation with a 2D Mip filter, which simulates a 2D box filter, effectively mitigates aliasing and dilation issues. Our evaluation, including scenarios such a training on single-scale images and testing on multiple scales, validates the effectiveness of our approach.",
+ "arxiv_url": "http://arxiv.org/abs/2311.16493v1",
+ "pdf_url": "http://arxiv.org/pdf/2311.16493v1",
+ "published_date": "2023-11-27",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "ar",
+ "gaussian splatting"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Animatable 3D Gaussian: Fast and High-Quality Reconstruction of Multiple Human Avatars",
+ "authors": [
+ "Yang Liu",
+ "Xiang Huang",
+ "Minghan Qin",
+ "Qinwei Lin",
+ "Haoqian Wang"
+ ],
+ "abstract": "Neural radiance fields are capable of reconstructing high-quality drivable human avatars but are expensive to train and render and not suitable for multi-human scenes with complex shadows. To reduce consumption, we propose Animatable 3D Gaussian, which learns human avatars from input images and poses. We extend 3D Gaussians to dynamic human scenes by modeling a set of skinned 3D Gaussians and a corresponding skeleton in canonical space and deforming 3D Gaussians to posed space according to the input poses. We introduce a multi-head hash encoder for pose-dependent shape and appearance and a time-dependent ambient occlusion module to achieve high-quality reconstructions in scenes containing complex motions and dynamic shadows. On both novel view synthesis and novel pose synthesis tasks, our method achieves higher reconstruction quality than InstantAvatar with less training time (1/60), less GPU memory (1/4), and faster rendering speed (7x). Our method can be easily extended to multi-human scenes and achieve comparable novel view synthesis results on a scene with ten people in only 25 seconds of training.",
+ "arxiv_url": "http://arxiv.org/abs/2311.16482v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.16482v3",
+ "published_date": "2023-11-27",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "dynamic",
+ "3d gaussian",
+ "motion",
+ "head",
+ "human",
+ "avatar",
+ "ar",
+ "shadow",
+ "fast"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GS-IR: 3D Gaussian Splatting for Inverse Rendering",
+ "authors": [
+ "Zhihao Liang",
+ "Qi Zhang",
+ "Ying Feng",
+ "Ying Shan",
+ "Kui Jia"
+ ],
+ "abstract": "We propose GS-IR, a novel inverse rendering approach based on 3D Gaussian Splatting (GS) that leverages forward mapping volume rendering to achieve photorealistic novel view synthesis and relighting results. Unlike previous works that use implicit neural representations and volume rendering (e.g. NeRF), which suffer from low expressive power and high computational complexity, we extend GS, a top-performance representation for novel view synthesis, to estimate scene geometry, surface material, and environment illumination from multi-view images captured under unknown lighting conditions. There are two main problems when introducing GS to inverse rendering: 1) GS does not support producing plausible normal natively; 2) forward mapping (e.g. rasterization and splatting) cannot trace the occlusion like backward mapping (e.g. ray tracing). To address these challenges, our GS-IR proposes an efficient optimization scheme that incorporates a depth-derivation-based regularization for normal estimation and a baking-based occlusion to model indirect lighting. The flexible and expressive GS representation allows us to achieve fast and compact geometry reconstruction, photorealistic novel view synthesis, and effective physically-based rendering. We demonstrate the superiority of our method over baseline methods through qualitative and quantitative evaluations on various challenging scenes.",
+ "arxiv_url": "http://arxiv.org/abs/2311.16473v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.16473v3",
+ "published_date": "2023-11-26",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "gaussian splatting",
+ "lighting",
+ "mapping",
+ "nerf",
+ "ray tracing",
+ "relighting",
+ "efficient",
+ "ar",
+ "illumination",
+ "face",
+ "fast",
+ "compact"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GaussianEditor: Swift and Controllable 3D Editing with Gaussian Splatting",
+ "authors": [
+ "Yiwen Chen",
+ "Zilong Chen",
+ "Chi Zhang",
+ "Feng Wang",
+ "Xiaofeng Yang",
+ "Yikai Wang",
+ "Zhongang Cai",
+ "Lei Yang",
+ "Huaping Liu",
+ "Guosheng Lin"
+ ],
+ "abstract": "3D editing plays a crucial role in many areas such as gaming and virtual reality. Traditional 3D editing methods, which rely on representations like meshes and point clouds, often fall short in realistically depicting complex scenes. On the other hand, methods based on implicit 3D representations, like Neural Radiance Field (NeRF), render complex scenes effectively but suffer from slow processing speeds and limited control over specific scene areas. In response to these challenges, our paper presents GaussianEditor, an innovative and efficient 3D editing algorithm based on Gaussian Splatting (GS), a novel 3D representation. GaussianEditor enhances precision and control in editing through our proposed Gaussian semantic tracing, which traces the editing target throughout the training process. Additionally, we propose Hierarchical Gaussian splatting (HGS) to achieve stabilized and fine results under stochastic generative guidance from 2D diffusion models. We also develop editing strategies for efficient object removal and integration, a challenging task for existing methods. Our comprehensive experiments demonstrate GaussianEditor's superior control, efficacy, and rapid performance, marking a significant advancement in 3D editing. Project Page: https://buaacyw.github.io/gaussian-editor/",
+ "arxiv_url": "http://arxiv.org/abs/2311.14521v4",
+ "pdf_url": "http://arxiv.org/pdf/2311.14521v4",
+ "published_date": "2023-11-24",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "gaussian splatting",
+ "nerf",
+ "efficient",
+ "ar",
+ "semantic"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Compact 3D Gaussian Representation for Radiance Field",
+ "authors": [
+ "Joo Chan Lee",
+ "Daniel Rho",
+ "Xiangyu Sun",
+ "Jong Hwan Ko",
+ "Eunbyung Park"
+ ],
+ "abstract": "Neural Radiance Fields (NeRFs) have demonstrated remarkable potential in capturing complex 3D scenes with high fidelity. However, one persistent challenge that hinders the widespread adoption of NeRFs is the computational bottleneck due to the volumetric rendering. On the other hand, 3D Gaussian splatting (3DGS) has recently emerged as an alternative representation that leverages a 3D Gaussisan-based representation and adopts the rasterization pipeline to render the images rather than volumetric rendering, achieving very fast rendering speed and promising image quality. However, a significant drawback arises as 3DGS entails a substantial number of 3D Gaussians to maintain the high fidelity of the rendered images, which requires a large amount of memory and storage. To address this critical issue, we place a specific emphasis on two key objectives: reducing the number of Gaussian points without sacrificing performance and compressing the Gaussian attributes, such as view-dependent color and covariance. To this end, we propose a learnable mask strategy that significantly reduces the number of Gaussians while preserving high performance. In addition, we propose a compact but effective representation of view-dependent color by employing a grid-based neural field rather than relying on spherical harmonics. Finally, we learn codebooks to compactly represent the geometric attributes of Gaussian by vector quantization. With model compression techniques such as quantization and entropy coding, we consistently show over 25$\\times$ reduced storage and enhanced rendering speed, while maintaining the quality of the scene representation, compared to 3DGS. Our work provides a comprehensive framework for 3D scene representation, achieving high performance, fast training, compactness, and real-time rendering. Our project page is available at https://maincold2.github.io/c3dgs/.",
+ "arxiv_url": "http://arxiv.org/abs/2311.13681v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.13681v2",
+ "published_date": "2023-11-22",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "gaussian splatting",
+ "nerf",
+ "ar",
+ "real-time rendering",
+ "fast",
+ "compact",
+ "compression"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Animatable 3D Gaussians for High-fidelity Synthesis of Human Motions",
+ "authors": [
+ "Keyang Ye",
+ "Tianjia Shao",
+ "Kun Zhou"
+ ],
+ "abstract": "We present a novel animatable 3D Gaussian model for rendering high-fidelity free-view human motions in real time. Compared to existing NeRF-based methods, the model owns better capability in synthesizing high-frequency details without the jittering problem across video frames. The core of our model is a novel augmented 3D Gaussian representation, which attaches each Gaussian with a learnable code. The learnable code serves as a pose-dependent appearance embedding for refining the erroneous appearance caused by geometric transformation of Gaussians, based on which an appearance refinement model is learned to produce residual Gaussian properties to match the appearance in target pose. To force the Gaussians to learn the foreground human only without background interference, we further design a novel alpha loss to explicitly constrain the Gaussians within the human body. We also propose to jointly optimize the human joint parameters to improve the appearance accuracy. The animatable 3D Gaussian model can be learned with shallow MLPs, so new human motions can be synthesized in real time (66 fps on avarage). Experiments show that our model has superior performance over NeRF-based methods.",
+ "arxiv_url": "http://arxiv.org/abs/2311.13404v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.13404v2",
+ "published_date": "2023-11-22",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "motion",
+ "high-fidelity",
+ "nerf",
+ "human",
+ "ar",
+ "body"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Depth-Regularized Optimization for 3D Gaussian Splatting in Few-Shot Images",
+ "authors": [
+ "Jaeyoung Chung",
+ "Jeongtaek Oh",
+ "Kyoung Mu Lee"
+ ],
+ "abstract": "In this paper, we present a method to optimize Gaussian splatting with a limited number of images while avoiding overfitting. Representing a 3D scene by combining numerous Gaussian splats has yielded outstanding visual quality. However, it tends to overfit the training views when only a small number of images are available. To address this issue, we introduce a dense depth map as a geometry guide to mitigate overfitting. We obtained the depth map using a pre-trained monocular depth estimation model and aligning the scale and offset using sparse COLMAP feature points. The adjusted depth aids in the color-based optimization of 3D Gaussian splatting, mitigating floating artifacts, and ensuring adherence to geometric constraints. We verify the proposed method on the NeRF-LLFF dataset with varying numbers of few images. Our approach demonstrates robust geometry compared to the original method that relies solely on images. Project page: robot0321.github.io/DepthRegGS",
+ "arxiv_url": "http://arxiv.org/abs/2311.13398v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.13398v3",
+ "published_date": "2023-11-22",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "gaussian splatting",
+ "nerf",
+ "few-shot",
+ "ar"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "LucidDreamer: Domain-free Generation of 3D Gaussian Splatting Scenes",
+ "authors": [
+ "Jaeyoung Chung",
+ "Suyoung Lee",
+ "Hyeongjin Nam",
+ "Jaerin Lee",
+ "Kyoung Mu Lee"
+ ],
+ "abstract": "With the widespread usage of VR devices and contents, demands for 3D scene generation techniques become more popular. Existing 3D scene generation models, however, limit the target scene to specific domain, primarily due to their training strategies using 3D scan dataset that is far from the real-world. To address such limitation, we propose LucidDreamer, a domain-free scene generation pipeline by fully leveraging the power of existing large-scale diffusion-based generative model. Our LucidDreamer has two alternate steps: Dreaming and Alignment. First, to generate multi-view consistent images from inputs, we set the point cloud as a geometrical guideline for each image generation. Specifically, we project a portion of point cloud to the desired view and provide the projection as a guidance for inpainting using the generative model. The inpainted images are lifted to 3D space with estimated depth maps, composing a new points. Second, to aggregate the new points into the 3D scene, we propose an aligning algorithm which harmoniously integrates the portions of newly generated 3D scenes. The finally obtained 3D scene serves as initial points for optimizing Gaussian splats. LucidDreamer produces Gaussian splats that are highly-detailed compared to the previous 3D scene generation methods, with no constraint on domain of the target scene. Project page: https://luciddreamer-cvlab.github.io/",
+ "arxiv_url": "http://arxiv.org/abs/2311.13384v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.13384v2",
+ "published_date": "2023-11-22",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "vr",
+ "3d gaussian",
+ "ar",
+ "gaussian splatting"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "SuGaR: Surface-Aligned Gaussian Splatting for Efficient 3D Mesh Reconstruction and High-Quality Mesh Rendering",
+ "authors": [
+ "Antoine Guédon",
+ "Vincent Lepetit"
+ ],
+ "abstract": "We propose a method to allow precise and extremely fast mesh extraction from 3D Gaussian Splatting. Gaussian Splatting has recently become very popular as it yields realistic rendering while being significantly faster to train than NeRFs. It is however challenging to extract a mesh from the millions of tiny 3D gaussians as these gaussians tend to be unorganized after optimization and no method has been proposed so far. Our first key contribution is a regularization term that encourages the gaussians to align well with the surface of the scene. We then introduce a method that exploits this alignment to extract a mesh from the Gaussians using Poisson reconstruction, which is fast, scalable, and preserves details, in contrast to the Marching Cubes algorithm usually applied to extract meshes from Neural SDFs. Finally, we introduce an optional refinement strategy that binds gaussians to the surface of the mesh, and jointly optimizes these Gaussians and the mesh through Gaussian splatting rendering. This enables easy editing, sculpting, rigging, animating, compositing and relighting of the Gaussians using traditional softwares by manipulating the mesh instead of the gaussians themselves. Retrieving such an editable mesh for realistic rendering is done within minutes with our method, compared to hours with the state-of-the-art methods on neural SDFs, while providing a better rendering quality. Our project page is the following: https://anttwo.github.io/sugar/",
+ "arxiv_url": "http://arxiv.org/abs/2311.12775v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.12775v3",
+ "published_date": "2023-11-21",
+ "categories": [
+ "cs.GR",
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "gaussian splatting",
+ "lighting",
+ "nerf",
+ "relighting",
+ "efficient",
+ "ar",
+ "face",
+ "fast"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "A Compact Dynamic 3D Gaussian Representation for Real-Time Dynamic View Synthesis",
+ "authors": [
+ "Kai Katsumata",
+ "Duc Minh Vo",
+ "Hideki Nakayama"
+ ],
+ "abstract": "3D Gaussian Splatting (3DGS) has shown remarkable success in synthesizing novel views given multiple views of a static scene. Yet, 3DGS faces challenges when applied to dynamic scenes because 3D Gaussian parameters need to be updated per timestep, requiring a large amount of memory and at least a dozen observations per timestep. To address these limitations, we present a compact dynamic 3D Gaussian representation that models positions and rotations as functions of time with a few parameter approximations while keeping other properties of 3DGS including scale, color and opacity invariant. Our method can dramatically reduce memory usage and relax a strict multi-view assumption. In our experiments on monocular and multi-view scenarios, we show that our method not only matches state-of-the-art methods, often linked with slower rendering speeds, in terms of high rendering quality but also significantly surpasses them by achieving a rendering speed of $118$ frames per second (FPS) at a resolution of 1,352$\\times$1,014 on a single GPU.",
+ "arxiv_url": "http://arxiv.org/abs/2311.12897v2",
+ "pdf_url": "http://arxiv.org/pdf/2311.12897v2",
+ "published_date": "2023-11-21",
+ "categories": [
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "dynamic",
+ "3d gaussian",
+ "gaussian splatting",
+ "ar",
+ "face",
+ "compact"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "PhysGaussian: Physics-Integrated 3D Gaussians for Generative Dynamics",
+ "authors": [
+ "Tianyi Xie",
+ "Zeshun Zong",
+ "Yuxing Qiu",
+ "Xuan Li",
+ "Yutao Feng",
+ "Yin Yang",
+ "Chenfanfu Jiang"
+ ],
+ "abstract": "We introduce PhysGaussian, a new method that seamlessly integrates physically grounded Newtonian dynamics within 3D Gaussians to achieve high-quality novel motion synthesis. Employing a custom Material Point Method (MPM), our approach enriches 3D Gaussian kernels with physically meaningful kinematic deformation and mechanical stress attributes, all evolved in line with continuum mechanics principles. A defining characteristic of our method is the seamless integration between physical simulation and visual rendering: both components utilize the same 3D Gaussian kernels as their discrete representations. This negates the necessity for triangle/tetrahedron meshing, marching cubes, \"cage meshes,\" or any other geometry embedding, highlighting the principle of \"what you see is what you simulate (WS$^2$).\" Our method demonstrates exceptional versatility across a wide variety of materials--including elastic entities, metals, non-Newtonian fluids, and granular materials--showcasing its strong capabilities in creating diverse visual content with novel viewpoints and movements. Our project page is at: https://xpandora.github.io/PhysGaussian/",
+ "arxiv_url": "http://arxiv.org/abs/2311.12198v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.12198v3",
+ "published_date": "2023-11-20",
+ "categories": [
+ "cs.GR",
+ "cs.AI",
+ "cs.CV",
+ "cs.LG"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "motion",
+ "dynamic",
+ "lighting",
+ "deformation",
+ "ar"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GS-SLAM: Dense Visual SLAM with 3D Gaussian Splatting",
+ "authors": [
+ "Chi Yan",
+ "Delin Qu",
+ "Dan Xu",
+ "Bin Zhao",
+ "Zhigang Wang",
+ "Dong Wang",
+ "Xuelong Li"
+ ],
+ "abstract": "In this paper, we introduce \\textbf{GS-SLAM} that first utilizes 3D Gaussian representation in the Simultaneous Localization and Mapping (SLAM) system. It facilitates a better balance between efficiency and accuracy. Compared to recent SLAM methods employing neural implicit representations, our method utilizes a real-time differentiable splatting rendering pipeline that offers significant speedup to map optimization and RGB-D rendering. Specifically, we propose an adaptive expansion strategy that adds new or deletes noisy 3D Gaussians in order to efficiently reconstruct new observed scene geometry and improve the mapping of previously observed areas. This strategy is essential to extend 3D Gaussian representation to reconstruct the whole scene rather than synthesize a static object in existing methods. Moreover, in the pose tracking process, an effective coarse-to-fine technique is designed to select reliable 3D Gaussian representations to optimize camera pose, resulting in runtime reduction and robust estimation. Our method achieves competitive performance compared with existing state-of-the-art real-time methods on the Replica, TUM-RGBD datasets. Project page: https://gs-slam.github.io/.",
+ "arxiv_url": "http://arxiv.org/abs/2311.11700v4",
+ "pdf_url": "http://arxiv.org/pdf/2311.11700v4",
+ "published_date": "2023-11-20",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "slam",
+ "gaussian splatting",
+ "tracking",
+ "mapping",
+ "efficient",
+ "ar",
+ "localization"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "LucidDreamer: Towards High-Fidelity Text-to-3D Generation via Interval Score Matching",
+ "authors": [
+ "Yixun Liang",
+ "Xin Yang",
+ "Jiantao Lin",
+ "Haodong Li",
+ "Xiaogang Xu",
+ "Yingcong Chen"
+ ],
+ "abstract": "The recent advancements in text-to-3D generation mark a significant milestone in generative models, unlocking new possibilities for creating imaginative 3D assets across various real-world scenarios. While recent advancements in text-to-3D generation have shown promise, they often fall short in rendering detailed and high-quality 3D models. This problem is especially prevalent as many methods base themselves on Score Distillation Sampling (SDS). This paper identifies a notable deficiency in SDS, that it brings inconsistent and low-quality updating direction for the 3D model, causing the over-smoothing effect. To address this, we propose a novel approach called Interval Score Matching (ISM). ISM employs deterministic diffusing trajectories and utilizes interval-based score matching to counteract over-smoothing. Furthermore, we incorporate 3D Gaussian Splatting into our text-to-3D generation pipeline. Extensive experiments show that our model largely outperforms the state-of-the-art in quality and training efficiency.",
+ "arxiv_url": "http://arxiv.org/abs/2311.11284v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.11284v3",
+ "published_date": "2023-11-19",
+ "categories": [
+ "cs.CV",
+ "cs.GR",
+ "cs.MM"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "ar",
+ "gaussian splatting",
+ "high-fidelity"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GaussianDiffusion: 3D Gaussian Splatting for Denoising Diffusion Probabilistic Models with Structured Noise",
+ "authors": [
+ "Xinhai Li",
+ "Huaibin Wang",
+ "Kuo-Kun Tseng"
+ ],
+ "abstract": "Text-to-3D, known for its efficient generation methods and expansive creative potential, has garnered significant attention in the AIGC domain. However, the pixel-wise rendering of NeRF and its ray marching light sampling constrain the rendering speed, impacting its utility in downstream industrial applications. Gaussian Splatting has recently shown a trend of replacing the traditional pointwise sampling technique commonly used in NeRF-based methodologies, and it is changing various aspects of 3D reconstruction. This paper introduces a novel text to 3D content generation framework, Gaussian Diffusion, based on Gaussian Splatting and produces more realistic renderings. The challenge of achieving multi-view consistency in 3D generation significantly impedes modeling complexity and accuracy. Taking inspiration from SJC, we explore employing multi-view noise distributions to perturb images generated by 3D Gaussian Splatting, aiming to rectify inconsistencies in multi-view geometry. We ingeniously devise an efficient method to generate noise that produces Gaussian noise from diverse viewpoints, all originating from a shared noise source. Furthermore, vanilla 3D Gaussian-based generation tends to trap models in local minima, causing artifacts like floaters, burrs, or proliferative elements. To mitigate these issues, we propose the variational Gaussian Splatting technique to enhance the quality and stability of 3D appearance. To our knowledge, our approach represents the first comprehensive utilization of Gaussian Diffusion across the entire spectrum of 3D content generation processes.",
+ "arxiv_url": "http://arxiv.org/abs/2311.11221v3",
+ "pdf_url": "http://arxiv.org/pdf/2311.11221v3",
+ "published_date": "2023-11-19",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "gaussian splatting",
+ "3d reconstruction",
+ "nerf",
+ "efficient",
+ "ar",
+ "ray marching"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "SplatArmor: Articulated Gaussian splatting for animatable humans from monocular RGB videos",
+ "authors": [
+ "Rohit Jena",
+ "Ganesh Subramanian Iyer",
+ "Siddharth Choudhary",
+ "Brandon Smith",
+ "Pratik Chaudhari",
+ "James Gee"
+ ],
+ "abstract": "We propose SplatArmor, a novel approach for recovering detailed and animatable human models by `armoring' a parameterized body model with 3D Gaussians. Our approach represents the human as a set of 3D Gaussians within a canonical space, whose articulation is defined by extending the skinning of the underlying SMPL geometry to arbitrary locations in the canonical space. To account for pose-dependent effects, we introduce a SE(3) field, which allows us to capture both the location and anisotropy of the Gaussians. Furthermore, we propose the use of a neural color field to provide color regularization and 3D supervision for the precise positioning of these Gaussians. We show that Gaussian splatting provides an interesting alternative to neural rendering based methods by leverging a rasterization primitive without facing any of the non-differentiability and optimization challenges typically faced in such approaches. The rasterization paradigms allows us to leverage forward skinning, and does not suffer from the ambiguities associated with inverse skinning and warping. We show compelling results on the ZJU MoCap and People Snapshot datasets, which underscore the effectiveness of our method for controllable human synthesis.",
+ "arxiv_url": "http://arxiv.org/abs/2311.10812v1",
+ "pdf_url": "http://arxiv.org/pdf/2311.10812v1",
+ "published_date": "2023-11-17",
+ "categories": [
+ "cs.CV",
+ "cs.GR",
+ "cs.LG"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "gaussian splatting",
+ "human",
+ "ar",
+ "face",
+ "body",
+ "neural rendering"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Compressed 3D Gaussian Splatting for Accelerated Novel View Synthesis",
+ "authors": [
+ "Simon Niedermayr",
+ "Josef Stumpfegger",
+ "Rüdiger Westermann"
+ ],
+ "abstract": "Recently, high-fidelity scene reconstruction with an optimized 3D Gaussian splat representation has been introduced for novel view synthesis from sparse image sets. Making such representations suitable for applications like network streaming and rendering on low-power devices requires significantly reduced memory consumption as well as improved rendering efficiency. We propose a compressed 3D Gaussian splat representation that utilizes sensitivity-aware vector clustering with quantization-aware training to compress directional colors and Gaussian parameters. The learned codebooks have low bitrates and achieve a compression rate of up to $31\\times$ on real-world scenes with only minimal degradation of visual quality. We demonstrate that the compressed splat representation can be efficiently rendered with hardware rasterization on lightweight GPUs at up to $4\\times$ higher framerates than reported via an optimized GPU compute pipeline. Extensive experiments across multiple datasets demonstrate the robustness and rendering speed of the proposed approach.",
+ "arxiv_url": "http://arxiv.org/abs/2401.02436v2",
+ "pdf_url": "http://arxiv.org/pdf/2401.02436v2",
+ "published_date": "2023-11-17",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "3d gaussian",
+ "gaussian splatting",
+ "high-fidelity",
+ "efficient",
+ "lightweight",
+ "ar",
+ "compression"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Drivable 3D Gaussian Avatars",
+ "authors": [
+ "Wojciech Zielonka",
+ "Timur Bagautdinov",
+ "Shunsuke Saito",
+ "Michael Zollhöfer",
+ "Justus Thies",
+ "Javier Romero"
+ ],
+ "abstract": "We present Drivable 3D Gaussian Avatars (D3GA), the first 3D controllable model for human bodies rendered with Gaussian splats. Current photorealistic drivable avatars require either accurate 3D registrations during training, dense input images during testing, or both. The ones based on neural radiance fields also tend to be prohibitively slow for telepresence applications. This work uses the recently presented 3D Gaussian Splatting (3DGS) technique to render realistic humans at real-time framerates, using dense calibrated multi-view videos as input. To deform those primitives, we depart from the commonly used point deformation method of linear blend skinning (LBS) and use a classic volumetric deformation method: cage deformations. Given their smaller size, we drive these deformations with joint angles and keypoints, which are more suitable for communication applications. Our experiments on nine subjects with varied body shapes, clothes, and motions obtain higher-quality results than state-of-the-art methods when using the same training and test data.",
+ "arxiv_url": "http://arxiv.org/abs/2311.08581v1",
+ "pdf_url": "http://arxiv.org/pdf/2311.08581v1",
+ "published_date": "2023-11-14",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "motion",
+ "3d gaussian",
+ "gaussian splatting",
+ "deformation",
+ "human",
+ "avatar",
+ "ar",
+ "body"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Dynamic Gaussian Splatting from Markerless Motion Capture can Reconstruct Infants Movements",
+ "authors": [
+ "R. James Cotton",
+ "Colleen Peyton"
+ ],
+ "abstract": "Easy access to precise 3D tracking of movement could benefit many aspects of rehabilitation. A challenge to achieving this goal is that while there are many datasets and pretrained algorithms for able-bodied adults, algorithms trained on these datasets often fail to generalize to clinical populations including people with disabilities, infants, and neonates. Reliable movement analysis of infants and neonates is important as spontaneous movement behavior is an important indicator of neurological function and neurodevelopmental disability, which can help guide early interventions. We explored the application of dynamic Gaussian splatting to sparse markerless motion capture (MMC) data. Our approach leverages semantic segmentation masks to focus on the infant, significantly improving the initialization of the scene. Our results demonstrate the potential of this method in rendering novel views of scenes and tracking infant movements. This work paves the way for advanced movement analysis tools that can be applied to diverse clinical populations, with a particular emphasis on early detection in infants.",
+ "arxiv_url": "http://arxiv.org/abs/2310.19441v1",
+ "pdf_url": "http://arxiv.org/pdf/2310.19441v1",
+ "published_date": "2023-10-30",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "dynamic",
+ "motion",
+ "gaussian splatting",
+ "tracking",
+ "segmentation",
+ "ar",
+ "semantic"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "Real-time Photorealistic Dynamic Scene Representation and Rendering with 4D Gaussian Splatting",
+ "authors": [
+ "Zeyu Yang",
+ "Hongye Yang",
+ "Zijie Pan",
+ "Li Zhang"
+ ],
+ "abstract": "Reconstructing dynamic 3D scenes from 2D images and generating diverse views over time is challenging due to scene complexity and temporal dynamics. Despite advancements in neural implicit models, limitations persist: (i) Inadequate Scene Structure: Existing methods struggle to reveal the spatial and temporal structure of dynamic scenes from directly learning the complex 6D plenoptic function. (ii) Scaling Deformation Modeling: Explicitly modeling scene element deformation becomes impractical for complex dynamics. To address these issues, we consider the spacetime as an entirety and propose to approximate the underlying spatio-temporal 4D volume of a dynamic scene by optimizing a collection of 4D primitives, with explicit geometry and appearance modeling. Learning to optimize the 4D primitives enables us to synthesize novel views at any desired time with our tailored rendering routine. Our model is conceptually simple, consisting of a 4D Gaussian parameterized by anisotropic ellipses that can rotate arbitrarily in space and time, as well as view-dependent and time-evolved appearance represented by the coefficient of 4D spherindrical harmonics. This approach offers simplicity, flexibility for variable-length video and end-to-end training, and efficient real-time rendering, making it suitable for capturing complex dynamic scene motions. Experiments across various benchmarks, including monocular and multi-view scenarios, demonstrate our 4DGS model's superior visual quality and efficiency.",
+ "arxiv_url": "http://arxiv.org/abs/2310.10642v3",
+ "pdf_url": "http://arxiv.org/pdf/2310.10642v3",
+ "published_date": "2023-10-16",
+ "categories": [
+ "cs.CV"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "dynamic",
+ "motion",
+ "gaussian splatting",
+ "deformation",
+ "4d",
+ "efficient",
+ "ar",
+ "real-time rendering"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "GaussianDreamer: Fast Generation from Text to 3D Gaussians by Bridging 2D and 3D Diffusion Models",
+ "authors": [
+ "Taoran Yi",
+ "Jiemin Fang",
+ "Junjie Wang",
+ "Guanjun Wu",
+ "Lingxi Xie",
+ "Xiaopeng Zhang",
+ "Wenyu Liu",
+ "Qi Tian",
+ "Xinggang Wang"
+ ],
+ "abstract": "In recent times, the generation of 3D assets from text prompts has shown impressive results. Both 2D and 3D diffusion models can help generate decent 3D objects based on prompts. 3D diffusion models have good 3D consistency, but their quality and generalization are limited as trainable 3D data is expensive and hard to obtain. 2D diffusion models enjoy strong abilities of generalization and fine generation, but 3D consistency is hard to guarantee. This paper attempts to bridge the power from the two types of diffusion models via the recent explicit and efficient 3D Gaussian splatting representation. A fast 3D object generation framework, named as GaussianDreamer, is proposed, where the 3D diffusion model provides priors for initialization and the 2D diffusion model enriches the geometry and appearance. Operations of noisy point growing and color perturbation are introduced to enhance the initialized Gaussians. Our GaussianDreamer can generate a high-quality 3D instance or 3D avatar within 15 minutes on one GPU, much faster than previous methods, while the generated instances can be directly rendered in real time. Demos and code are available at https://taoranyi.com/gaussiandreamer/.",
+ "arxiv_url": "http://arxiv.org/abs/2310.08529v3",
+ "pdf_url": "http://arxiv.org/pdf/2310.08529v3",
+ "published_date": "2023-10-12",
+ "categories": [
+ "cs.CV",
+ "cs.GR"
+ ],
+ "github_url": "",
+ "keywords": [
+ "geometry",
+ "3d gaussian",
+ "gaussian splatting",
+ "avatar",
+ "efficient",
+ "ar",
+ "fast"
+ ],
+ "citations": 0,
+ "semantic_url": ""
+ },
+ {
+ "title": "4D Gaussian Splatting for Real-Time Dynamic Scene Rendering",
+ "authors": [
+ "Guanjun Wu",
+ "Taoran Yi",
+ "Jiemin Fang",
+ "Lingxi Xie",
+ "Xiaopeng Zhang",
+ "Wei Wei",
+ "Wenyu Liu",
+ "Qi Tian",
+ "Xinggang Wang"
+ ],
+ "abstract": "Representing and rendering dynamic scenes has been an important but challenging task. Especially, to accurately model complex motions, high efficiency is usually hard to guarantee. To achieve real-time dynamic scene rendering while also enjoying high training and storage efficiency, we propose 4D Gaussian Splatting (4D-GS) as a holistic representation for dynamic scenes rather than applying 3D-GS for each individual frame. In 4D-GS, a novel explicit representation containing both 3D Gaussians and 4D neural voxels is proposed. A decomposed neural voxel encoding algorithm inspired by HexPlane is proposed to efficiently build Gaussian features from 4D neural voxels and then a lightweight MLP is applied to predict Gaussian deformations at novel timestamps. Our 4D-GS method achieves real-time rendering under high resolutions, 82 FPS at an 800$\\times$800 resolution on an RTX 3090 GPU while maintaining comparable or better quality than previous state-of-the-art methods.
More demos and code are available at https://guanjunwu.github.io/4dgs/.", + "arxiv_url": "http://arxiv.org/abs/2310.08528v3", + "pdf_url": "http://arxiv.org/pdf/2310.08528v3", + "published_date": "2023-10-12", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "motion", + "gaussian splatting", + "deformation", + "4d", + "efficient", + "lightweight", + "ar", + "real-time rendering" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "DreamGaussian: Generative Gaussian Splatting for Efficient 3D Content Creation", + "authors": [ + "Jiaxiang Tang", + "Jiawei Ren", + "Hang Zhou", + "Ziwei Liu", + "Gang Zeng" + ], + "abstract": "Recent advances in 3D content creation mostly leverage optimization-based 3D generation via score distillation sampling (SDS). Though promising results have been exhibited, these methods often suffer from slow per-sample optimization, limiting their practical usage. In this paper, we propose DreamGaussian, a novel 3D content generation framework that achieves both efficiency and quality simultaneously. Our key insight is to design a generative 3D Gaussian Splatting model with companioned mesh extraction and texture refinement in UV space. In contrast to the occupancy pruning used in Neural Radiance Fields, we demonstrate that the progressive densification of 3D Gaussians converges significantly faster for 3D generative tasks. To further enhance the texture quality and facilitate downstream applications, we introduce an efficient algorithm to convert 3D Gaussians into textured meshes and apply a fine-tuning stage to refine the details. Extensive experiments demonstrate the superior efficiency and competitive generation quality of our proposed approach. 
Notably, DreamGaussian produces high-quality textured meshes in just 2 minutes from a single-view image, achieving approximately 10 times acceleration compared to existing methods.", + "arxiv_url": "http://arxiv.org/abs/2309.16653v2", + "pdf_url": "http://arxiv.org/pdf/2309.16653v2", + "published_date": "2023-09-28", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "acceleration", + "efficient", + "ar", + "fast" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Text-to-3D using Gaussian Splatting", + "authors": [ + "Zilong Chen", + "Feng Wang", + "Yikai Wang", + "Huaping Liu" + ], + "abstract": "Automatic text-to-3D generation that combines Score Distillation Sampling (SDS) with the optimization of volume rendering has achieved remarkable progress in synthesizing realistic 3D objects. Yet most existing text-to-3D methods by SDS and volume rendering suffer from inaccurate geometry, e.g., the Janus issue, since it is hard to explicitly integrate 3D priors into implicit 3D representations. Besides, it is usually time-consuming for them to generate elaborate 3D models with rich colors. In response, this paper proposes GSGEN, a novel method that adopts Gaussian Splatting, a recent state-of-the-art representation, to text-to-3D generation. GSGEN aims at generating high-quality 3D objects and addressing existing shortcomings by exploiting the explicit nature of Gaussian Splatting that enables the incorporation of 3D prior. Specifically, our method adopts a progressive optimization strategy, which includes a geometry optimization stage and an appearance refinement stage. In geometry optimization, a coarse representation is established under 3D point cloud diffusion prior along with the ordinary 2D SDS optimization, ensuring a sensible and 3D-consistent rough shape. Subsequently, the obtained Gaussians undergo an iterative appearance refinement to enrich texture details. 
In this stage, we increase the number of Gaussians by compactness-based densification to enhance continuity and improve fidelity. With these designs, our approach can generate 3D assets with delicate details and accurate geometry. Extensive evaluations demonstrate the effectiveness of our method, especially for capturing high-frequency components. Our code is available at https://github.com/gsgen3d/gsgen", + "arxiv_url": "http://arxiv.org/abs/2309.16585v4", + "pdf_url": "http://arxiv.org/pdf/2309.16585v4", + "published_date": "2023-09-28", + "categories": [ + "cs.CV" + ], + "github_url": "https://github.com/gsgen3d/gsgen", + "keywords": [ + "compact", + "geometry", + "ar", + "gaussian splatting" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Deformable 3D Gaussians for High-Fidelity Monocular Dynamic Scene Reconstruction", + "authors": [ + "Ziyi Yang", + "Xinyu Gao", + "Wen Zhou", + "Shaohui Jiao", + "Yuqing Zhang", + "Xiaogang Jin" + ], + "abstract": "Implicit neural representation has paved the way for new approaches to dynamic scene reconstruction and rendering. Nonetheless, cutting-edge dynamic neural rendering methods rely heavily on these implicit representations, which frequently struggle to capture the intricate details of objects in the scene. Furthermore, implicit methods have difficulty achieving real-time rendering in general dynamic scenes, limiting their use in a variety of tasks. To address the issues, we propose a deformable 3D Gaussians Splatting method that reconstructs scenes using 3D Gaussians and learns them in canonical space with a deformation field to model monocular dynamic scenes. We also introduce an annealing smoothing training mechanism with no extra overhead, which can mitigate the impact of inaccurate poses on the smoothness of time interpolation tasks in real-world datasets. 
Through a differential Gaussian rasterizer, the deformable 3D Gaussians not only achieve higher rendering quality but also real-time rendering speed. Experiments show that our method outperforms existing methods significantly in terms of both rendering quality and speed, making it well-suited for tasks such as novel-view synthesis, time interpolation, and real-time rendering.", + "arxiv_url": "http://arxiv.org/abs/2309.13101v2", + "pdf_url": "http://arxiv.org/pdf/2309.13101v2", + "published_date": "2023-09-22", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "head", + "high-fidelity", + "deformation", + "ar", + "real-time rendering", + "neural rendering" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Flexible Techniques for Differentiable Rendering with 3D Gaussians", + "authors": [ + "Leonid Keselman", + "Martial Hebert" + ], + "abstract": "Fast, reliable shape reconstruction is an essential ingredient in many computer vision applications. Neural Radiance Fields demonstrated that photorealistic novel view synthesis is within reach, but was gated by performance requirements for fast reconstruction of real scenes and objects. Several recent approaches have built on alternative shape representations, in particular, 3D Gaussians. We develop extensions to these renderers, such as integrating differentiable optical flow, exporting watertight meshes and rendering per-ray normals. Additionally, we show how two of the recent methods are interoperable with each other. These reconstructions are quick, robust, and easily performed on GPU or CPU. 
For code and visual examples, see https://leonidk.github.io/fmb-plus", + "arxiv_url": "http://arxiv.org/abs/2308.14737v1", + "pdf_url": "http://arxiv.org/pdf/2308.14737v1", + "published_date": "2023-08-28", + "categories": [ + "cs.CV", + "cs.AI", + "cs.GR", + "I.2.10; I.3.7; I.4.0" + ], + "github_url": "", + "keywords": [ + "fast", + "3d gaussian", + "ar", + "shape reconstruction" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Dynamic 3D Gaussians: Tracking by Persistent Dynamic View Synthesis", + "authors": [ + "Jonathon Luiten", + "Georgios Kopanas", + "Bastian Leibe", + "Deva Ramanan" + ], + "abstract": "We present a method that simultaneously addresses the tasks of dynamic scene novel-view synthesis and six degree-of-freedom (6-DOF) tracking of all dense scene elements. We follow an analysis-by-synthesis framework, inspired by recent work that models scenes as a collection of 3D Gaussians which are optimized to reconstruct input images via differentiable rendering. To model dynamic scenes, we allow Gaussians to move and rotate over time while enforcing that they have persistent color, opacity, and size. By regularizing Gaussians' motion and rotation with local-rigidity constraints, we show that our Dynamic 3D Gaussians correctly model the same area of physical space over time, including the rotation of that space. Dense 6-DOF tracking and dynamic reconstruction emerges naturally from persistent dynamic view synthesis, without requiring any correspondence or flow as input. 
We demonstrate a large number of downstream applications enabled by our representation, including first-person view synthesis, dynamic compositional scene synthesis, and 4D video editing.", + "arxiv_url": "http://arxiv.org/abs/2308.09713v1", + "pdf_url": "http://arxiv.org/pdf/2308.09713v1", + "published_date": "2023-08-18", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "motion", + "tracking", + "4d", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "3D Gaussian Splatting for Real-Time Radiance Field Rendering", + "authors": [ + "Bernhard Kerbl", + "Georgios Kopanas", + "Thomas Leimkühler", + "George Drettakis" + ], + "abstract": "Radiance Field methods have recently revolutionized novel-view synthesis of scenes captured with multiple photos or videos. However, achieving high visual quality still requires neural networks that are costly to train and render, while recent faster methods inevitably trade off speed for quality. For unbounded and complete scenes (rather than isolated objects) and 1080p resolution rendering, no current method can achieve real-time display rates. We introduce three key elements that allow us to achieve state-of-the-art visual quality while maintaining competitive training times and importantly allow high-quality real-time (>= 30 fps) novel-view synthesis at 1080p resolution. First, starting from sparse points produced during camera calibration, we represent the scene with 3D Gaussians that preserve desirable properties of continuous volumetric radiance fields for scene optimization while avoiding unnecessary computation in empty space; Second, we perform interleaved optimization/density control of the 3D Gaussians, notably optimizing anisotropic covariance to achieve an accurate representation of the scene; Third, we develop a fast visibility-aware rendering algorithm that supports anisotropic splatting and both accelerates training and allows realtime rendering. 
We demonstrate state-of-the-art visual quality and real-time rendering on several established datasets.", + "arxiv_url": "http://arxiv.org/abs/2308.04079v1", + "pdf_url": "http://arxiv.org/pdf/2308.04079v1", + "published_date": "2023-08-08", + "categories": [ + "cs.GR", + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "ar", + "real-time rendering", + "fast" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Decoherence in Neutrino Oscillation between 3D Gaussian Wave Packets", + "authors": [ + "Haruhi Mitani", + "Kin-ya Oda" + ], + "abstract": "There is renewed attention to whether we can observe the decoherence effect in neutrino oscillation due to the separation of wave packets with different masses in near-future experiments. As a contribution to this endeavor, we extend the existing formulation based on a single 1D Gaussian wave function to an amplitude between two distinct 3D Gaussian wave packets, corresponding to the neutrinos being produced and detected, with different central momenta and spacetime positions and with different widths. We find that the spatial widths-squared for the production and detection appear additively in the (de)coherence length and in the localization factor for governing the propagation of the wave packet, whereas they appear as the reduced one (inverse of the sum of inverse) in the momentum conservation factor. 
The overall probability is governed by the ratio of the reduced to the sum.", + "arxiv_url": "http://arxiv.org/abs/2307.12230v2", + "pdf_url": "http://arxiv.org/pdf/2307.12230v2", + "published_date": "2023-07-23", + "categories": [ + "hep-ph", + "hep-th" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "localization" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "NEAT: Distilling 3D Wireframes from Neural Attraction Fields", + "authors": [ + "Nan Xue", + "Bin Tan", + "Yuxi Xiao", + "Liang Dong", + "Gui-Song Xia", + "Tianfu Wu", + "Yujun Shen" + ], + "abstract": "This paper studies the problem of structured 3D reconstruction using wireframes that consist of line segments and junctions, focusing on the computation of structured boundary geometries of scenes. Instead of leveraging matching-based solutions from 2D wireframes (or line segments) for 3D wireframe reconstruction as done in prior arts, we present NEAT, a rendering-distilling formulation using neural fields to represent 3D line segments with 2D observations, and bipartite matching for perceiving and distilling of a sparse set of 3D global junctions. The proposed {NEAT} enjoys the joint optimization of the neural fields and the global junctions from scratch, using view-dependent 2D observations without precomputed cross-view feature matching. Comprehensive experiments on the DTU and BlendedMVS datasets demonstrate our NEAT's superiority over state-of-the-art alternatives for 3D wireframe reconstruction. Moreover, the distilled 3D global junctions by NEAT, are a better initialization than SfM points, for the recently-emerged 3D Gaussian Splatting for high-fidelity novel view synthesis using about 20 times fewer initial 3D points. 
Project page: \\url{https://xuenan.net/neat}.", + "arxiv_url": "http://arxiv.org/abs/2307.10206v2", + "pdf_url": "http://arxiv.org/pdf/2307.10206v2", + "published_date": "2023-07-14", + "categories": [ + "cs.CV", + "cs.GR" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "high-fidelity", + "3d reconstruction", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "DiViNeT: 3D Reconstruction from Disparate Views via Neural Template Regularization", + "authors": [ + "Aditya Vora", + "Akshay Gadi Patil", + "Hao Zhang" + ], + "abstract": "We present a volume rendering-based neural surface reconstruction method that takes as few as three disparate RGB images as input. Our key idea is to regularize the reconstruction, which is severely ill-posed and leaving significant gaps between the sparse views, by learning a set of neural templates to act as surface priors. Our method, coined DiViNet, operates in two stages. It first learns the templates, in the form of 3D Gaussian functions, across different scenes, without 3D supervision. In the reconstruction stage, our predicted templates serve as anchors to help \"stitch'' the surfaces over sparse regions. We demonstrate that our approach is not only able to complete the surface geometry but also reconstructs surface details to a reasonable extent from a few disparate input views. 
On the DTU and BlendedMVS datasets, our approach achieves the best reconstruction quality among existing methods in the presence of such sparse views and performs on par, if not better, with competing methods when dense views are employed as inputs.", + "arxiv_url": "http://arxiv.org/abs/2306.04699v4", + "pdf_url": "http://arxiv.org/pdf/2306.04699v4", + "published_date": "2023-06-07", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "sparse view", + "geometry", + "3d gaussian", + "3d reconstruction", + "ar", + "face" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Control4D: Efficient 4D Portrait Editing with Text", + "authors": [ + "Ruizhi Shao", + "Jingxiang Sun", + "Cheng Peng", + "Zerong Zheng", + "Boyao Zhou", + "Hongwen Zhang", + "Yebin Liu" + ], + "abstract": "We introduce Control4D, an innovative framework for editing dynamic 4D portraits using text instructions. Our method addresses the prevalent challenges in 4D editing, notably the inefficiencies of existing 4D representations and the inconsistent editing effect caused by diffusion-based editors. We first propose GaussianPlanes, a novel 4D representation that makes Gaussian Splatting more structured by applying plane-based decomposition in 3D space and time. This enhances both efficiency and robustness in 4D editing. Furthermore, we propose to leverage a 4D generator to learn a more continuous generation space from inconsistent edited images produced by the diffusion-based editor, which effectively improves the consistency and quality of 4D editing. Comprehensive evaluation demonstrates the superiority of Control4D, including significantly reduced training time, high-quality rendering, and spatial-temporal consistency in 4D portrait editing. 
The link to our project website is https://control4darxiv.github.io.", + "arxiv_url": "http://arxiv.org/abs/2305.20082v2", + "pdf_url": "http://arxiv.org/pdf/2305.20082v2", + "published_date": "2023-05-31", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "dynamic", + "gaussian splatting", + "4d", + "efficient", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Deceptive-NeRF/3DGS: Diffusion-Generated Pseudo-Observations for High-Quality Sparse-View Reconstruction", + "authors": [ + "Xinhang Liu", + "Jiaben Chen", + "Shiu-hong Kao", + "Yu-Wing Tai", + "Chi-Keung Tang" + ], + "abstract": "Novel view synthesis via Neural Radiance Fields (NeRFs) or 3D Gaussian Splatting (3DGS) typically necessitates dense observations with hundreds of input images to circumvent artifacts. We introduce Deceptive-NeRF/3DGS to enhance sparse-view reconstruction with only a limited set of input images, by leveraging a diffusion model pre-trained from multiview datasets. Different from using diffusion priors to regularize representation optimization, our method directly uses diffusion-generated images to train NeRF/3DGS as if they were real input views. Specifically, we propose a deceptive diffusion model turning noisy images rendered from few-view reconstructions into high-quality photorealistic pseudo-observations. To resolve consistency among pseudo-observations and real input views, we develop an uncertainty measure to guide the diffusion model's generation. Our system progressively incorporates diffusion-generated pseudo-observations into the training image sets, ultimately densifying the sparse input observations by 5 to 10 times. 
Extensive experiments across diverse and challenging datasets validate that our approach outperforms existing state-of-the-art methods and is capable of synthesizing novel views with super-resolution in the few-view setting.", + "arxiv_url": "http://arxiv.org/abs/2305.15171v4", + "pdf_url": "http://arxiv.org/pdf/2305.15171v4", + "published_date": "2023-05-24", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "gaussian splatting", + "nerf", + "ar", + "sparse-view" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "NOVUM: Neural Object Volumes for Robust Object Classification", + "authors": [ + "Artur Jesslen", + "Guofeng Zhang", + "Angtian Wang", + "Wufei Ma", + "Alan Yuille", + "Adam Kortylewski" + ], + "abstract": "Discriminative models for object classification typically learn image-based representations that do not capture the compositional and 3D nature of objects. In this work, we show that explicitly integrating 3D compositional object representations into deep networks for image classification leads to a largely enhanced generalization in out-of-distribution scenarios. In particular, we introduce a novel architecture, referred to as NOVUM, that consists of a feature extractor and a neural object volume for every target object class. Each neural object volume is a composition of 3D Gaussians that emit feature vectors. This compositional object representation allows for a highly robust and fast estimation of the object class by independently matching the features of the 3D Gaussians of each category to features extracted from an input image. Additionally, the object pose can be estimated via inverse rendering of the corresponding neural object volume. 
To enable the classification of objects, the neural features at each 3D Gaussian are trained discriminatively to be distinct from (i) the features of 3D Gaussians in other categories, (ii) features of other 3D Gaussians of the same object, and (iii) the background features. Our experiments show that NOVUM offers intriguing advantages over standard architectures due to the 3D compositional structure of the object representation, namely: (1) An exceptional robustness across a spectrum of real-world and synthetic out-of-distribution shifts and (2) an enhanced human interpretability compared to standard models, all while maintaining real-time inference and a competitive accuracy on in-distribution data.", + "arxiv_url": "http://arxiv.org/abs/2305.14668v4", + "pdf_url": "http://arxiv.org/pdf/2305.14668v4", + "published_date": "2023-05-24", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "human", + "fast", + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Generation of artificial facial drug abuse images using Deep De-identified anonymous Dataset augmentation through Genetics Algorithm (3DG-GA)", + "authors": [ + "Hazem Zein", + "Lou Laurent", + "Régis Fournier", + "Amine Nait-Ali" + ], + "abstract": "In biomedical research and artificial intelligence, access to large, well-balanced, and representative datasets is crucial for developing trustworthy applications that can be used in real-world scenarios. However, obtaining such datasets can be challenging, as they are often restricted to hospitals and specialized facilities. To address this issue, the study proposes to generate highly realistic synthetic faces exhibiting drug abuse traits through augmentation. The proposed method, called \"3DG-GA\", Deep De-identified anonymous Dataset Generation, uses Genetics Algorithm as a strategy for synthetic faces generation. The algorithm includes GAN artificial face generation, forgery detection, and face recognition. 
Initially, a dataset of 120 images of actual facial drug abuse is used. By preserving, the drug traits, the 3DG-GA provides a dataset containing 3000 synthetic facial drug abuse images. The dataset will be open to the scientific community, which can reproduce our results and benefit from the generated datasets while avoiding legal or ethical restrictions.", + "arxiv_url": "http://arxiv.org/abs/2304.06106v1", + "pdf_url": "http://arxiv.org/pdf/2304.06106v1", + "published_date": "2023-04-12", + "categories": [ + "cs.CV", + "cs.AI" + ], + "github_url": "", + "keywords": [ + "medical", + "recognition", + "face", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Quantitative perfusion and water transport time model from multi b-value diffusion magnetic resonance imaging validated against neutron capture microspheres", + "authors": [ + "M. Liu", + "N. Saadat", + "Y. Jeong", + "S. Roth", + "M. Niekrasz", + "M. Giurcanu", + "T. Carroll", + "G. Christoforidis" + ], + "abstract": "Intravoxel Incoherent Motion (IVIM) is a non-contrast magnetic resonance imaging diffusion-based scan that uses a multitude of b-values to measure various speeds of molecular perfusion and diffusion, sidestepping inaccuracy of arterial input functions or bolus kinetics in quantitative imaging. We test a new method of IVIM quantification and compare our values to reference standard neutron capture microspheres across normocapnia, CO2 induced hypercapnia, and middle cerebral artery occlusion in a controlled animal model. Perfusion quantification in ml/100g/min compared to microsphere perfusion uses the 3D gaussian probability distribution and defined water transport time as when 50% of the molecules remain in the tissue of interest. Perfusion, water transport time, and infarct volume was compared to reference standards. Simulations were studied to suppress non-specific cerebrospinal fluid (CSF). 
Linear regression analysis of quantitative perfusion returned correlation (slope = .55, intercept = 52.5, $R^2$= .64). Linear regression for water transport time asymmetry in infarcted tissue was excellent (slope = .59, intercept = .3, $R^2$ = .93). Strong linear agreement also was found for infarct volume (slope = 1.01, $R^2$= .79). Simulation of CSF suppression via inversion recovery returned blood signal reduced by 82% from combined T1 and T2 effects. Intra-physiologic state comparison of perfusion shows potential partial volume effects which require further study especially in disease states. The accuracy and sensitivity of IVIM provides evidence that observed signal changes reflect cytotoxic edema and tissue perfusion. Partial volume contamination of CSF may be better removed during post-processing rather than with inversion recovery to avoid artificial loss of blood signal.", + "arxiv_url": "http://arxiv.org/abs/2304.01888v1", + "pdf_url": "http://arxiv.org/pdf/2304.01888v1", + "published_date": "2023-04-04", + "categories": [ + "physics.med-ph", + "eess.IV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "motion" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Light-Weight Pointcloud Representation with Sparse Gaussian Process", + "authors": [ + "Mahmoud Ali", + "Lantao Liu" + ], + "abstract": "This paper presents a framework to represent high-fidelity pointcloud sensor observations for efficient communication and storage. The proposed approach exploits Sparse Gaussian Process to encode pointcloud into a compact form. Our approach represents both the free space and the occupied space using only one model (one 2D Sparse Gaussian Process) instead of the existing two-model framework (two 3D Gaussian Mixture Models). We achieve this by proposing a variance-based sampling technique that effectively discriminates between the free and occupied space. 
The new representation requires less memory footprint and can be transmitted across limitedbandwidth communication channels. The framework is extensively evaluated in simulation and it is also demonstrated using a real mobile robot equipped with a 3D LiDAR. Our method results in a 70 to 100 times reduction in the communication rate compared to sending the raw pointcloud.", + "arxiv_url": "http://arxiv.org/abs/2301.11251v1", + "pdf_url": "http://arxiv.org/pdf/2301.11251v1", + "published_date": "2023-01-26", + "categories": [ + "cs.RO" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "high-fidelity", + "ar", + "efficient", + "compact" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "FedGS: Federated Graph-based Sampling with Arbitrary Client Availability", + "authors": [ + "Zheng Wang", + "Xiaoliang Fan", + "Jianzhong Qi", + "Haibing Jin", + "Peizhen Yang", + "Siqi Shen", + "Cheng Wang" + ], + "abstract": "While federated learning has shown strong results in optimizing a machine learning model without direct access to the original data, its performance may be hindered by intermittent client availability which slows down the convergence and biases the final learned model. There are significant challenges to achieve both stable and bias-free training under arbitrary client availability. To address these challenges, we propose a framework named Federated Graph-based Sampling (FedGS), to stabilize the global model update and mitigate the long-term bias given arbitrary client availability simultaneously. First, we model the data correlations of clients with a Data-Distribution-Dependency Graph (3DG) that helps keep the sampled clients data apart from each other, which is theoretically shown to improve the approximation to the optimal model update. Second, constrained by the far-distance in data distribution of the sampled clients, we further minimize the variance of the numbers of times that the clients are sampled, to mitigate long-term bias. 
To validate the effectiveness of FedGS, we conduct experiments on three datasets under a comprehensive set of seven client availability modes. Our experimental results confirm FedGS's advantage in both enabling a fair client-sampling scheme and improving the model performance under arbitrary client availability. Our code is available at \\url{https://github.com/WwZzz/FedGS}.", + "arxiv_url": "http://arxiv.org/abs/2211.13975v3", + "pdf_url": "http://arxiv.org/pdf/2211.13975v3", + "published_date": "2022-11-25", + "categories": [ + "cs.LG" + ], + "github_url": "https://github.com/WwZzz/FedGS", + "keywords": [ + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "3DG-STFM: 3D Geometric Guided Student-Teacher Feature Matching", + "authors": [ + "Runyu Mao", + "Chen Bai", + "Yatong An", + "Fengqing Zhu", + "Cheng Lu" + ], + "abstract": "We tackle the essential task of finding dense visual correspondences between a pair of images. This is a challenging problem due to various factors such as poor texture, repetitive patterns, illumination variation, and motion blur in practical scenarios. In contrast to methods that use dense correspondence ground-truths as direct supervision for local feature matching training, we train 3DG-STFM: a multi-modal matching model (Teacher) to enforce the depth consistency under 3D dense correspondence supervision and transfer the knowledge to 2D unimodal matching model (Student). Both teacher and student models consist of two transformer-based matching modules that obtain dense correspondences in a coarse-to-fine manner. The teacher model guides the student model to learn RGB-induced depth information for the matching purpose on both coarse and fine branches. We also evaluate 3DG-STFM on a model compression task. To the best of our knowledge, 3DG-STFM is the first student-teacher learning method for the local feature matching task. 
The experiments show that our method outperforms state-of-the-art methods on indoor and outdoor camera pose estimations, and homography estimation problems. Code is available at: https://github.com/Ryan-prime/3DG-STFM.", + "arxiv_url": "http://arxiv.org/abs/2207.02375v2", + "pdf_url": "http://arxiv.org/pdf/2207.02375v2", + "published_date": "2022-07-06", + "categories": [ + "cs.CV" + ], + "github_url": "https://github.com/Ryan-prime/3DG-STFM", + "keywords": [ + "motion", + "outdoor", + "ar", + "illumination", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Contour Generation with Realistic Inter-observer Variation", + "authors": [ + "Eliana Vásquez Osorio", + "Jane Shortall", + "Jennifer Robbins", + "Marcel van Herk" + ], + "abstract": "Contours are used in radiotherapy treatment planning to identify regions to be irradiated with high dose and regions to be spared. Therefore, any contouring uncertainty influences the whole treatment. Even though this is the biggest remaining source of uncertainty when daily IGRT or adaptation is used, it has not been accounted for quantitatively in treatment planning. Using probabilistic planning allows to directly account for contouring uncertainties in plan optimisation. The first step is to create an algorithm that can generate many realistic contours with variation matching actual inter-observer variation. We propose a methodology to generate random contours, based on measured spatial inter-observer variation, IOV, and a single parameter that controls its geometrical dependency: alpha, the width of the 3D Gaussian used as point spread function (PSF). We used a level set formulation of the median shape, with the level set function defined as the signed distance transform. To create a new contour, we added the median level set and a noise map which was weighted with the IOV map and then convolved with the PSF. Thresholding the level set function reconstructs the newly generated contour. 
We used data from 18 patients from the golden atlas, consisting of five prostate delineations on T2-w MRI scans. To evaluate the similarity between the contours, we calculated the maximum distance to agreement to the median shape (maxDTA), and the minimum dose of the contours using an ideal dose distribution. We used the two-sample Kolmogorov-Smirnov test to compare the distributions for maxDTA and minDose between the generated and manually delineated contours. Only alpha=0.75cm produced maxDTA and minDose distributions that were not significantly different from the manually delineated structures. Accounting for the PSF is essential to correctly simulate inter-observer variation.", + "arxiv_url": "http://arxiv.org/abs/2204.10098v1", + "pdf_url": "http://arxiv.org/pdf/2204.10098v1", + "published_date": "2022-04-21", + "categories": [ + "physics.med-ph" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "The Sloan Digital Sky Survey Peculiar Velocity Catalogue", + "authors": [ + "Cullan Howlett", + "Khaled Said", + "John R. Lucey", + "Matthew Colless", + "Fei Qin", + "Yan Lai", + "R. Brent Tully", + "Tamara M. Davis" + ], + "abstract": "We present a new catalogue of distances and peculiar velocities (PVs) of $34,059$ early-type galaxies derived from Fundamental Plane (FP) measurements using data from the Sloan Digital Sky Survey (SDSS). This $7016\\,\\mathrm{deg}^{2}$ homogeneous sample comprises the largest set of peculiar velocities produced to date and extends the reach of PV surveys up to a redshift limit of $z=0.1$. Our SDSS-based FP distance measurements have a mean uncertainty of 23%. Alongside the data, we produce an ensemble of 2,048 mock galaxy catalogues that reproduce the data selection function, and are used to validate our fitting pipelines and check for systematic errors. 
We uncover a significant trend between group richness and mean surface brightness within the sample, which may hint at an environmental dependence within the FP or the presence of unresolved systematics, and can result in biased peculiar velocities. This is removed using multiple FP fits as function of group richness, a procedure made tractable through a new analytic derivation for the integral of a 3D Gaussian over non-trivial limits. Our catalogue is calibrated to the zero-point of the CosmicFlows-III sample with an uncertainty of $0.004$ dex (not including cosmic variance or the error within CosmicFlows-III itself), which is validated using independent cross-checks with the predicted zero-point from the 2M++ reconstruction of our local velocity field. Finally, as an example of what is possible with our new catalogue, we obtain preliminary bulk flow measurements up to a depth of $135\\,h^{-1}\\mathrm{Mpc}$. We find a slightly larger-than-expected bulk flow at high redshift, although this could be caused by the presence of the Shapley supercluster which lies outside the SDSS PV footprint.", + "arxiv_url": "http://arxiv.org/abs/2201.03112v2", + "pdf_url": "http://arxiv.org/pdf/2201.03112v2", + "published_date": "2022-01-09", + "categories": [ + "astro-ph.CO", + "astro-ph.GA" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "survey", + "face" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "The Yang-Mills heat flow with random distributional initial data", + "authors": [ + "Sky Cao", + "Sourav Chatterjee" + ], + "abstract": "We construct local solutions to the Yang-Mills heat flow (in the DeTurck gauge) for a certain class of random distributional initial data, which includes the 3D Gaussian free field. The main idea, which goes back to work of Bourgain as well as work of Da Prato-Debussche, is to decompose the solution into a rougher linear part and a smoother nonlinear part, and to control the latter by probabilistic arguments. 
In a companion work, we use the main results of this paper to propose a way towards the construction of 3D Yang-Mills measures.", + "arxiv_url": "http://arxiv.org/abs/2111.10652v4", + "pdf_url": "http://arxiv.org/pdf/2111.10652v4", + "published_date": "2021-11-20", + "categories": [ + "math.PR", + "hep-th", + "math-ph", + "math.AP", + "math.MP", + "35R60, 35A01, 60G60, 81T13" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Topology and geometry of Gaussian random fields II: on critical points, excursion sets, and persistent homology", + "authors": [ + "Pratyush Pranav" + ], + "abstract": "This paper is the second in the series, following Pranav et al. (2019), focused on the characterization of geometric and topological properties of 3D Gaussian random fields. We focus on the formalism of persistent homology, the mainstay of Topological Data Analysis (TDA), in the context of excursion set formalism. We also focus on the structure of critical points of stochastic fields, and their relationship with formation and evolution of structures in the universe. The topological background is accompanied by an investigation of Gaussian field simulations based on the LCDM spectrum, as well as power-law spectra with varying spectral indices. We present the statistical properties in terms of the intensity and difference maps constructed from the persistence diagrams, as well as their distribution functions. We demonstrate that the intensity maps encapsulate information about the distribution of power across the hierarchies of structures in more detail than the Betti numbers or the Euler characteristic. In particular, the white noise ($n = 0$) case with flat spectrum stands out as the divide between models with positive and negative spectral index. It has the highest proportion of low significance features. 
This level of information is not available from the geometric Minkowski functionals or the topological Euler characteristic, or even the Betti numbers, and demonstrates the usefulness of hierarchical topological methods. Another important result is the observation that topological characteristics of Gaussian fields depend on the power spectrum, as opposed to the geometric measures that are insensitive to the power spectrum characteristics.", + "arxiv_url": "http://arxiv.org/abs/2109.08721v1", + "pdf_url": "http://arxiv.org/pdf/2109.08721v1", + "published_date": "2021-09-17", + "categories": [ + "astro-ph.CO", + "math.AT" + ], + "github_url": "", + "keywords": [ + "geometry", + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "GaussiGAN: Controllable Image Synthesis with 3D Gaussians from Unposed Silhouettes", + "authors": [ + "Youssef A. Mejjati", + "Isa Milefchik", + "Aaron Gokaslan", + "Oliver Wang", + "Kwang In Kim", + "James Tompkin" + ], + "abstract": "We present an algorithm that learns a coarse 3D representation of objects from unposed multi-view 2D mask supervision, then uses it to generate detailed mask and image texture. In contrast to existing voxel-based methods for unposed object reconstruction, our approach learns to represent the generated shape and pose with a set of self-supervised canonical 3D anisotropic Gaussians via a perspective camera, and a set of per-image transforms. We show that this approach can robustly estimate a 3D space for the camera and object, while recent baselines sometimes struggle to reconstruct coherent 3D spaces in this setting. We show results on synthetic datasets with realistic lighting, and demonstrate object insertion with interactive posing. 
With our work, we help move towards structured representations that handle more real-world variation in learning-based object reconstruction.", + "arxiv_url": "http://arxiv.org/abs/2106.13215v1", + "pdf_url": "http://arxiv.org/pdf/2106.13215v1", + "published_date": "2021-06-24", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "lighting" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Probabilistic Localization of Insect-Scale Drones on Floating-Gate Inverter Arrays", + "authors": [ + "Priyesh Shukla", + "Ankith Muralidhar", + "Nick Iliev", + "Theja Tulabandhula", + "Sawyer B. Fuller", + "Amit Ranjan Trivedi" + ], + "abstract": "We propose a novel compute-in-memory (CIM)-based ultra-low-power framework for probabilistic localization of insect-scale drones. The conventional probabilistic localization approaches rely on the three-dimensional (3D) Gaussian Mixture Model (GMM)-based representation of a 3D map. A GMM model with hundreds of mixture functions is typically needed to adequately learn and represent the intricacies of the map. Meanwhile, localization using complex GMM map models is computationally intensive. Since insect-scale drones operate under extremely limited area/power budget, continuous localization using GMM models entails much higher operating energy -- thereby, limiting flying duration and/or size of the drone due to a larger battery. Addressing the computational challenges of localization in an insect-scale drone using a CIM approach, we propose a novel framework of 3D map representation using a harmonic mean of \"Gaussian-like\" mixture (HMGM) model. The likelihood function useful for drone localization can be efficiently implemented by connecting many multi-input inverters in parallel, each programmed with the parameters of the 3D map model represented as HMGM. 
When the depth measurements are projected to the input of the implementation, the summed current of the inverters emulates the likelihood of the measurement. We have characterized our approach on an RGB-D indoor localization dataset. The average localization error in our approach is $\\sim$0.1125 m which is only slightly degraded than software-based evaluation ($\\sim$0.08 m). Meanwhile, our localization framework is ultra-low-power, consuming as little as $\\sim$17 $\\mu$W power while processing a depth frame in 1.33 ms over hundred pose hypotheses in the particle-filtering (PF) algorithm used to localize the drone.", + "arxiv_url": "http://arxiv.org/abs/2102.08247v2", + "pdf_url": "http://arxiv.org/pdf/2102.08247v2", + "published_date": "2021-02-16", + "categories": [ + "cs.RO", + "cs.AR", + "eess.IV", + "B.7; I.2.9" + ], + "github_url": "", + "keywords": [ + "efficient", + "ar", + "localization" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Visual Analysis of Large Multivariate Scattered Data using Clustering and Probabilistic Summaries", + "authors": [ + "Tobias Rapp", + "Christoph Peters", + "Carsten Dachsbacher" + ], + "abstract": "Rapidly growing data sizes of scientific simulations pose significant challenges for interactive visualization and analysis techniques. In this work, we propose a compact probabilistic representation to interactively visualize large scattered datasets. In contrast to previous approaches that represent blocks of volumetric data using probability distributions, we model clusters of arbitrarily structured multivariate data. In detail, we discuss how to efficiently represent and store a high-dimensional distribution for each cluster. We observe that it suffices to consider low-dimensional marginal distributions for two or three data dimensions at a time to employ common visual analysis techniques. 
Based on this observation, we represent high-dimensional distributions by combinations of low-dimensional Gaussian mixture models. We discuss the application of common interactive visual analysis techniques to this representation. In particular, we investigate several frequency-based views, such as density plots in 1D and 2D, density-based parallel coordinates, and a time histogram. We visualize the uncertainty introduced by the representation, discuss a level-of-detail mechanism, and explicitly visualize outliers. Furthermore, we propose a spatial visualization by splatting anisotropic 3D Gaussians for which we derive a closed-form solution. Lastly, we describe the application of brushing and linking to this clustered representation. Our evaluation on several large, real-world datasets demonstrates the scaling of our approach.", + "arxiv_url": "http://arxiv.org/abs/2008.09544v2", + "pdf_url": "http://arxiv.org/pdf/2008.09544v2", + "published_date": "2020-08-21", + "categories": [ + "cs.GR" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "efficient", + "compact", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Algebraic 3D Graphic Statics: reciprocal constructions", + "authors": [ + "Márton Hablicsek", + "Masoud Akbarzadeh", + "Yi Guo" + ], + "abstract": "The recently developed 3D graphic statics (3DGS) lacks a rigorous mathematical definition relating the geometrical and topological properties of the reciprocal polyhedral diagrams as well as a precise method for the geometric construction of these diagrams. This paper provides a fundamental algebraic formulation for 3DGS by developing equilibrium equations around the edges of the primal diagram and satisfying the equations by the closeness of the polygons constructed by the edges of the corresponding faces in the dual/reciprocal diagram. The research provides multiple numerical methods for solving the equilibrium equations and explains the advantage of using each technique. 
The approach of this paper can be used for compression-and-tension combined form-finding and analysis as it allows constructing both the form and force diagram based on the interpretation of the input diagram. Besides, the paper expands on the geometric/static degrees of (in)determinacies of the diagrams using the algebraic formulation and shows how these properties can be used for the constrained manipulation of the polyhedrons in an interactive environment without breaking the reciprocity between the two.", + "arxiv_url": "http://arxiv.org/abs/2007.15720v1", + "pdf_url": "http://arxiv.org/pdf/2007.15720v1", + "published_date": "2020-07-30", + "categories": [ + "cs.CG", + "J.6; J.2" + ], + "github_url": "", + "keywords": [ + "ar", + "face", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Algebraic 3D Graphic Statics: Constrained Areas", + "authors": [ + "Masoud Akbarzadeh", + "Marton Hablicsek" + ], + "abstract": "This research provides algorithms and numerical methods to geometrically control the magnitude of the internal and external forces in the reciprocal diagrams of 3D/Polyhedral Graphic statics (3DGS). In 3DGS, the form of the structure and its equilibrium of forces is represented by two polyhedral diagrams that are geometrically and topologically related. The areas of the faces of the force diagram represent the magnitude of the internal and external forces in the system. For the first time, the methods of this research allow the user to control and constrain the areas and edge lengths of the faces of general polyhedrons that can be convex, self-intersecting, or concave. As a result, a designer can explicitly control the force magnitudes in the force diagram and explore the equilibrium of a variety of compression and tension-combined funicular structural forms. In this method, a quadratic formulation is used to compute the area of a single face based on its edge lengths. 
The approach is applied to manipulating the face geometry with a predefined area and the edge lengths. Subsequently, the geometry of the polyhedron is updated with newly changed faces. This approach is a multi-step algorithm where each step includes computing the geometry of a single face and updating the polyhedral geometry. One of the unique results of this framework is the construction of the zero-area, self-intersecting faces, where the sum of the signed areas of a self-intersecting face is zero, representing a member with zero force in the form diagram. The methodology of this research can clarify the equilibrium of some systems that could not be previously justified using reciprocal polyhedral diagrams. Therefore, it generalizes the principle of the equilibrium of polyhedral frames and opens a completely new horizon in the design of highly-sophisticated funicular polyhedral structures beyond compression-only systems.", + "arxiv_url": "http://arxiv.org/abs/2007.15133v1", + "pdf_url": "http://arxiv.org/pdf/2007.15133v1", + "published_date": "2020-07-29", + "categories": [ + "cs.CG", + "physics.app-ph", + "J.6; J.2" + ], + "github_url": "", + "keywords": [ + "geometry", + "ar", + "face", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "3D-GMNet: Single-View 3D Shape Recovery as A Gaussian Mixture", + "authors": [ + "Kohei Yamashita", + "Shohei Nobuhara", + "Ko Nishino" + ], + "abstract": "In this paper, we introduce 3D-GMNet, a deep neural network for 3D object shape reconstruction from a single image. As the name suggests, 3D-GMNet recovers 3D shape as a Gaussian mixture. In contrast to voxels, point clouds, or meshes, a Gaussian mixture representation provides an analytical expression with a small memory footprint while accurately representing the target 3D shape. 
At the same time, it offers a number of additional advantages including instant pose estimation and controllable level-of-detail reconstruction, while also enabling interpretation as a point cloud, volume, and a mesh model. We train 3D-GMNet end-to-end with single input images and corresponding 3D models by introducing two novel loss functions, a 3D Gaussian mixture loss and a 2D multi-view loss, which collectively enable accurate shape reconstruction as kernel density estimation. We thoroughly evaluate the effectiveness of 3D-GMNet with synthetic and real images of objects. The results show accurate reconstruction with a compact representation that also realizes novel applications of single-image 3D reconstruction.", + "arxiv_url": "http://arxiv.org/abs/1912.04663v2", + "pdf_url": "http://arxiv.org/pdf/1912.04663v2", + "published_date": "2019-12-10", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "3d reconstruction", + "ar", + "shape reconstruction", + "compact" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Correcting the formalism governing Bloch Surface Waves excited by 3D Gaussian beams", + "authors": [ + "Fadi Issam Baida", + "Maria-Pilar Bernal" + ], + "abstract": "Due to the growing number of publications and applications based on the exploitation of Bloch surface waves and the gross errors and approximations that are regularly used to evaluate the properties of this type of wave, we judge seriously important for successful interpretation and understanding of experiments to implement adapted formalism allowing to extract the relevant information. Through a comprehensive calculation supported by an analytical development, we establish a generalized formula for the propagation length which is different from what is usually employed in the literature. 
We also demonstrate that the Goos-H\\\"anchen shift becomes an extrinsic property that depends on the beam dimension with an asymptotic behavior limiting its value to that of the propagation length. The proposed theoretical scheme allows predicting some new and unforeseen results such as the effect due to a slight deviation of the angle of incidence or of the beam-waist position with respect to the structure. This formalism can be used to describe any polarization-dependent resonant structure illuminated by a polarized Gaussian beam.", + "arxiv_url": "http://arxiv.org/abs/1907.03476v1", + "pdf_url": "http://arxiv.org/pdf/1907.03476v1", + "published_date": "2019-07-08", + "categories": [ + "physics.optics", + "physics.comp-ph" + ], + "github_url": "", + "keywords": [ + "understanding", + "3d gaussian", + "ar", + "face" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Power-spectrum simulations of radial redshift distributions", + "authors": [ + "Andrei Ryabinkov", + "Aleksandr Kaminker" + ], + "abstract": "On the base of the simplest model of a modulation of 3D Gaussian field in $k$-space we produce a set of simulations to bring out the effects of a modulating function $f_{\\rm mod} (k)=f_1 (k) + f_2 (k)$ on power spectra of radial (shell-like) distributions of cosmological objects, where a model function $f_1 (k)$ reproduces the smoothed power spectrum of underlying 3D density fluctuations, while $f_2 (k)$ is a wiggling function imitating the baryon acoustic oscillations (BAO). It is shown that some excess of realizations of simulated radial distributions actually displays quasi-periodical components with periods about a characteristic scale $2\\pi/k \\sim 100~h^{-1}$~Mpc detected as power-spectrum peaks in vicinity of the first maximum of the modulation function $f_2 (k)$. We revised our previous estimations of the significance of such peaks and found that they were largely overestimated. 
Thereby quasi-periodical components appearing in some radial distributions of matter are likely to be stochastic (rather than determinative), while the amplitudes of the respective spectral peaks can be quite noticeable. They are partly enhanced by smooth part of the modulating function $f_1(k)$ and, to a far lesser extent, by effects of the BAO (i.e. $f_2(k)$). The results of the simulations match quite well with statistical properties of the radial distributions of the brightest cluster galaxies (BCGs).", + "arxiv_url": "http://arxiv.org/abs/1905.06283v1", + "pdf_url": "http://arxiv.org/pdf/1905.06283v1", + "published_date": "2019-05-15", + "categories": [ + "astro-ph.CO" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Deep AutoEncoder-based Lossy Geometry Compression for Point Clouds", + "authors": [ + "Wei Yan", + "Yiting shao", + "Shan Liu", + "Thomas H Li", + "Zhu Li", + "Ge Li" + ], + "abstract": "Point cloud is a fundamental 3D representation which is widely used in real world applications such as autonomous driving. As a newly-developed media format which is characterized by complexity and irregularity, point cloud creates a need for compression algorithms which are more flexible than existing codecs. Recently, autoencoders(AEs) have shown their effectiveness in many visual analysis tasks as well as image compression, which inspires us to employ it in point cloud compression. In this paper, we propose a general autoencoder-based architecture for lossy geometry point cloud compression. To the best of our knowledge, it is the first autoencoder-based geometry compression codec that directly takes point clouds as input rather than voxel grids or collections of images. Compared with handcrafted codecs, this approach adapts much more quickly to previously unseen media contents and media formats, meanwhile achieving competitive performance. 
Our architecture consists of a pointnet-based encoder, a uniform quantizer, an entropy estimation block and a nonlinear synthesis transformation module. In lossy geometry compression of point cloud, results show that the proposed method outperforms the test model for categories 1 and 3 (TMC13) published by MPEG-3DG group on the 125th meeting, and on average a 73.15\\% BD-rate gain is achieved.", + "arxiv_url": "http://arxiv.org/abs/1905.03691v1", + "pdf_url": "http://arxiv.org/pdf/1905.03691v1", + "published_date": "2019-04-18", + "categories": [ + "cs.CV", + "cs.MM", + "eess.IV" + ], + "github_url": "", + "keywords": [ + "geometry", + "ar", + "autonomous driving", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "3D cosmic shear: numerical challenges, 3D lensing random fields generation and Minkowski Functionals for cosmological inference", + "authors": [ + "A. Spurio Mancini", + "P. L. Taylor", + "R. Reischke", + "T. Kitching", + "V. Pettorino", + "B. M. Schäfer", + "B. Zieser", + "Ph. M. Merkel" + ], + "abstract": "Cosmic shear - the weak gravitational lensing effect generated by fluctuations of the gravitational tidal fields of the large-scale structure - is one of the most promising tools for current and future cosmological analyses. The spherical-Bessel decomposition of the cosmic shear field (\"3D cosmic shear\") is one way to maximise the amount of redshift information in a lensing analysis and therefore provides a powerful tool to investigate in particular the growth of cosmic structure that is crucial for dark energy studies. However, the computation of simulated 3D cosmic shear covariance matrices presents numerical difficulties, due to the required integrations over highly oscillatory functions. We present and compare two numerical methods and relative implementations to perform these integrations. 
We then show how to generate 3D Gaussian random fields on the sky in spherical coordinates, starting from the 3D cosmic shear covariances. To validate our field-generation procedure, we calculate the Minkowski functionals associated with our random fields, compare them with the known expectation values for the Gaussian case and demonstrate parameter inference from Minkowski functionals from a cosmic shear survey. This is a first step towards producing fully 3D Minkowski functionals for a lognormal field in 3D to extract Gaussian and non-Gaussian information from the cosmic shear field, as well as towards the use of Minkowski functionals as a probe of cosmology beyond the commonly used two-point statistics.", + "arxiv_url": "http://arxiv.org/abs/1807.11461v3", + "pdf_url": "http://arxiv.org/pdf/1807.11461v3", + "published_date": "2018-07-30", + "categories": [ + "astro-ph.CO" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "survey" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Hybrid Point Cloud Attribute Compression Using Slice-based Layered Structure and Block-based Intra Prediction", + "authors": [ + "Yiting Shao", + "Qi Zhang", + "Ge Li", + "Zhu Li" + ], + "abstract": "Point cloud compression is a key enabler for the emerging applications of immersive visual communication, autonomous driving and smart cities, etc. In this paper, we propose a hybrid point cloud attribute compression scheme built on an original layered data structure. First, a slice-partition scheme and geometry-adaptive k dimensional-tree (kd-tree) method are devised to generate the four-layer structure. Second, we introduce an efficient block-based intra prediction scheme containing a DC prediction mode and several angular modes, in order to exploit the spatial correlation between adjacent points. Third, an adaptive transform scheme based on Graph Fourier Transform (GFT) is Lagrangian optimized to achieve better transform efficiency. 
The Lagrange multiplier is off-line derived based on the statistics of color attribute coding. Last but not least, multiple reordering scan modes are dedicated to improve coding efficiency for entropy coding. In intra-frame compression of point cloud color attributes, results demonstrate that our method performs better than the state-of-the-art region-adaptive hierarchical transform (RAHT) system, and on average a 29.37$\\%$ BD-rate gain is achieved. Comparing with the test model for category 1 (TMC1) anchor's coding results, which were recently published by MPEG-3DG group on 121st meeting, a 16.37$\\%$ BD-rate gain is obtained.", + "arxiv_url": "http://arxiv.org/abs/1804.10783v1", + "pdf_url": "http://arxiv.org/pdf/1804.10783v1", + "published_date": "2018-04-28", + "categories": [ + "cs.MM" + ], + "github_url": "", + "keywords": [ + "geometry", + "efficient", + "ar", + "autonomous driving", + "compression" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "A Generic Phase between Disordered Weyl Semimetal and Diffusive Metal", + "authors": [ + "Ying Su", + "X. S. Wang", + "X. R. Wang" + ], + "abstract": "Quantum phase transitions of three-dimensional (3D) Weyl semimetals (WSMs) subject to uncorrelated on-site disorder are investigated through quantum conductance calculations and finite-size scaling of localization length. Contrary to previous claims that a direct transition from a WSM to a diffusive metal (DM) occurs, an intermediate phase of Chern insulator (CI) between the two distinct metallic phases should exist due to internode scattering that is comparable to intranode scattering. The critical exponent of localization length is $\\nu\\simeq 1.3$ for both the WSM-CI and CI-DM transitions, in the same universality class of 3D Gaussian unitary ensemble of the Anderson localization transition. The CI phase is confirmed by quantized nonzero Hall conductances in the bulk insulating phase established by localization length calculations. 
The disorder-induced various plateau-plateau transitions in both the WSM and CI phases are observed and explained by the self-consistent Born approximation. Furthermore, we clarify that the occurrence of zero density of states at Weyl nodes is not a good criterion for the disordered WSM, and there is no fundamental principle to support the hypothesis of divergence of localization length at the WSM-DM transition.", + "arxiv_url": "http://arxiv.org/abs/1701.00905v2", + "pdf_url": "http://arxiv.org/pdf/1701.00905v2", + "published_date": "2017-01-04", + "categories": [ + "cond-mat.dis-nn", + "cond-mat.mes-hall" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "localization" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Bayesian Modeling of Motion Perception using Dynamical Stochastic Textures", + "authors": [ + "Jonathan Vacher", + "Andrew Isaac Meso", + "Laurent U. Perrinet", + "Gabriel Peyré" + ], + "abstract": "A common practice to account for psychophysical biases in vision is to frame them as consequences of a dynamic process relying on optimal inference with respect to a generative model. The present study details the complete formulation of such a generative model intended to probe visual motion perception with a dynamic texture model. It is first derived in a set of axiomatic steps constrained by biological plausibility. We extend previous contributions by detailing three equivalent formulations of this texture model. First, the composite dynamic textures are constructed by the random aggregation of warped patterns, which can be viewed as 3D Gaussian fields. Secondly, these textures are cast as solutions to a stochastic partial differential equation (sPDE). This essential step enables real time, on-the-fly texture synthesis using time-discretized auto-regressive processes. It also allows for the derivation of a local motion-energy model, which corresponds to the log-likelihood of the probability density. 
The log-likelihoods are essential for the construction of a Bayesian inference framework. We use the dynamic texture model to psychophysically probe speed perception in humans using zoom-like changes in the spatial frequency content of the stimulus. The human data replicates previous findings showing perceived speed to be positively biased by spatial frequency increments. A Bayesian observer who combines a Gaussian likelihood centered at the true speed and a spatial frequency dependent width with a \"slow speed prior\" successfully accounts for the perceptual bias. More precisely, the bias arises from a decrease in the observer's likelihood width estimated from the experiments as the spatial frequency increases. Such a trend is compatible with the trend of the dynamic texture likelihood width.", + "arxiv_url": "http://arxiv.org/abs/1611.01390v2", + "pdf_url": "http://arxiv.org/pdf/1611.01390v2", + "published_date": "2016-11-02", + "categories": [ + "q-bio.NC", + "cs.CV" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "motion", + "human", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Model-based Outdoor Performance Capture", + "authors": [ + "Nadia Robertini", + "Dan Casas", + "Helge Rhodin", + "Hans-Peter Seidel", + "Christian Theobalt" + ], + "abstract": "We propose a new model-based method to accurately reconstruct human performances captured outdoors in a multi-camera setup. Starting from a template of the actor model, we introduce a new unified implicit representation for both, articulated skeleton tracking and nonrigid surface shape refinement. Our method fits the template to unsegmented video frames in two stages - first, the coarse skeletal pose is estimated, and subsequently non-rigid surface shape and body pose are jointly refined. 
Particularly for surface shape refinement we propose a new combination of 3D Gaussians designed to align the projected model with likely silhouette contours without explicit segmentation or edge detection. We obtain reconstructions of much higher quality in outdoor settings than existing methods, and show that we are on par with state-of-the-art methods on indoor scenes for which they were designed.", + "arxiv_url": "http://arxiv.org/abs/1610.06740v1", + "pdf_url": "http://arxiv.org/pdf/1610.06740v1", + "published_date": "2016-10-21", + "categories": [ + "cs.CV" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "outdoor", + "tracking", + "segmentation", + "human", + "ar", + "face", + "body" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Stability of 3D Gaussian vortices in an unbounded, rotating, vertically-stratified, Boussinesq flow: Linear analysis", + "authors": [ + "Mani Mahdinia", + "Pedram Hassanzadeh", + "Philip S. Marcus", + "Chung-Hsiang Jiang" + ], + "abstract": "The linear stability of three-dimensional (3D) vortices in rotating, stratified flows has been studied by analyzing the non-hydrostatic inviscid Boussinesq equations. We have focused on a widely-used model of geophysical and astrophysical vortices, which assumes an axisymmetric Gaussian structure for pressure anomalies in the horizontal and vertical directions. For a range of Rossby number ($-0.5 < Ro < 0.5$) and Burger number ($0.02 < Bu < 2.3$) relevant to observed long-lived vortices, the growth rate and spatial structure of the most unstable eigenmodes have been numerically calculated and presented as a function of $Ro-Bu$. We have found neutrally-stable vortices only over a small region of the $Ro-Bu$ parameter space: cyclones with $Ro \sim 0.02-0.05$ and $Bu \sim 0.85-0.95$. However, we have also found that anticyclones in general have slower growth rates compared to cyclones. 
In particular, the growth rate of the most unstable eigenmode for anticyclones in a large region of the parameter space (e.g., $Ro<0$ and $0.5 \\lesssim Bu \\lesssim 1.3$) is slower than $50$ turn-around times of the vortex (which often corresponds to several years for ocean eddies). For cyclones, the region with such slow growth rates is confined to $0 - ]/ tends to zero at increasing volumes. We also perform the same analysis for the standard overlap for which instead the lack of factorization persists increasing the size of the system. The necessity of a better understanding of the mutual relation between the two overlaps is pointed out.", + "arxiv_url": "http://arxiv.org/abs/cond-mat/0503155v2", + "pdf_url": "http://arxiv.org/pdf/cond-mat/0503155v2", + "published_date": "2005-03-07", + "categories": [ + "cond-mat.dis-nn", + "math-ph", + "math.MP" + ], + "github_url": "", + "keywords": [ + "understanding", + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "3D Continuum radiative transfer in complex dust configurations around young stellar objects and active nuclei II. 3D Structure of the dense molecular cloud core Rho Oph D", + "authors": [ + "J. Steinacker", + "A. Bacmann", + "Th. Henning", + "R. Klessen", + "M. Stickel" + ], + "abstract": "Constraints on the density and thermal 3D structure of the dense molecular cloud core Rho Oph D are derived from a detailed 3D radiative transfer modeling. Two ISOCAM images at 7 and 15 micron are fitted simultaneously by representing the dust distribution in the core with a series of 3D Gaussian density profiles. Size, total density, and position of the Gaussians are optimized by simulated annealing to obtain a 2D column density map. The projected core density has a complex elongated pattern with two peaks. We propose a new method to calculate an approximate temperature in an externally illuminated complex 3D structure from a mean optical depth. 
This T(tau)-method is applied to a 1.3 mm map obtained with the IRAM 30m telescope to find the approximate 3D density and temperature distribution of the core Rho Oph D. The spatial 3D distribution deviates strongly from spherical symmetry. The elongated structure is in general agreement with recent gravo-turbulent collapse calculations for molecular clouds. We discuss possible ambiguities of the background determination procedure, errors of the maps, the accuracy of the T(tau)-method, and the influence of the assumed dust particle sizes and properties.", + "arxiv_url": "http://arxiv.org/abs/astro-ph/0410635v2", + "pdf_url": "http://arxiv.org/pdf/astro-ph/0410635v2", + "published_date": "2004-10-26", + "categories": [ + "astro-ph" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Spanning avalanches in the three-dimensional Gaussian Random Field Ising Model with metastable dynamics: field dependence and geometrical properties", + "authors": [ + "Francisco-Jose Perez-Reche", + "Eduard Vives" + ], + "abstract": "Spanning avalanches in the 3D Gaussian Random Field Ising Model (3D-GRFIM) with metastable dynamics at T=0 have been studied. Statistical analysis of the field values for which avalanches occur has enabled a Finite-Size Scaling (FSS) study of the avalanche density to be performed. Furthermore, direct measurement of the geometrical properties of the avalanches has confirmed an earlier hypothesis that several kinds of spanning avalanches with two different fractal dimensions coexist at the critical point. 
We finally compare the phase diagram of the 3D-GRFIM with metastable dynamics with the same model in equilibrium at T=0.", + "arxiv_url": "http://arxiv.org/abs/cond-mat/0403754v1", + "pdf_url": "http://arxiv.org/pdf/cond-mat/0403754v1", + "published_date": "2004-03-31", + "categories": [ + "cond-mat.dis-nn", + "cond-mat.stat-mech" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Finite Size Scaling analysis of the avalanches in the 3d Gaussian Random Field Ising Model with metastable dynamics", + "authors": [ + "F. J. Perez-Reche", + "Eduard Vives" + ], + "abstract": "A numerical study is presented of the 3d Gaussian Random Field Ising Model at T=0 driven by an external field. Standard synchronous relaxation dynamics is employed to obtain the magnetization versus field hysteresis loops. The focus is on the analysis of the number and size distribution of the magnetization avalanches. They are classified as being non-spanning, 1d-spanning, 2d-spanning or 3d-spanning depending on whether or not they span the whole lattice in the different space directions. Moreover, finite-size scaling analysis enables identification of two different types of non-spanning avalanches (critical and supercritical) and two different types of 3d-spanning avalanches (critical and subcritical), whose numbers increase with L as a power-law with different exponents. 
We conclude by giving a scenario for the avalanches behaviour in the thermodynamic limit.", + "arxiv_url": "http://arxiv.org/abs/cond-mat/0206075v3", + "pdf_url": "http://arxiv.org/pdf/cond-mat/0206075v3", + "published_date": "2002-06-06", + "categories": [ + "cond-mat.dis-nn", + "cond-mat.stat-mech" + ], + "github_url": "", + "keywords": [ + "dynamic", + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "The three-dimensional random field Ising magnet: interfaces, scaling, and the nature of states", + "authors": [ + "A. Alan Middleton", + "Daniel S. Fisher" + ], + "abstract": "The nature of the zero temperature ordering transition in the 3D Gaussian random field Ising magnet is studied numerically, aided by scaling analyses. In the ferromagnetic phase the scaling of the roughness of the domain walls, $w\\sim L^\\zeta$, is consistent with the theoretical prediction $\\zeta = 2/3$. As the randomness is increased through the transition, the probability distribution of the interfacial tension of domain walls scales as for a single second order transition. At the critical point, the fractal dimensions of domain walls and the fractal dimension of the outer surface of spin clusters are investigated: there are at least two distinct physically important fractal dimensions. These dimensions are argued to be related to combinations of the energy scaling exponent, $\\theta$, which determines the violation of hyperscaling, the correlation length exponent $\\nu$, and the magnetization exponent $\\beta$. The value $\\beta = 0.017\\pm 0.005$ is derived from the magnetization: this estimate is supported by the study of the spin cluster size distribution at criticality. The variation of configurations in the interior of a sample with boundary conditions is consistent with the hypothesis that there is a single transition separating the disordered phase with one ground state from the ordered phase with two ground states. 
The array of results are shown to be consistent with a scaling picture and a geometric description of the influence of boundary conditions on the spins. The details of the algorithm used and its implementation are also described.", + "arxiv_url": "http://arxiv.org/abs/cond-mat/0107489v1", + "pdf_url": "http://arxiv.org/pdf/cond-mat/0107489v1", + "published_date": "2001-07-24", + "categories": [ + "cond-mat.dis-nn", + "cond-mat.stat-mech" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar", + "face" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Tunneling of a Massless Field through a 3D Gaussian Barrier", + "authors": [ + "G. Modanese" + ], + "abstract": "We propose a method for the approximate computation of the Green function of a scalar massless field Phi subjected to potential barriers of given size and shape in spacetime. This technique is applied to the case of a 3D gaussian ellipsoid-like barrier, placed on the axis between two pointlike sources of the field. Instead of the Green function we compute its temporal integral, that gives the static potential energy of the interaction of the two sources. Such interaction takes place in part by tunneling of the quanta of Phi across the barrier. We evaluate numerically the correction to the potential in dependence on the size of the barrier and on the barrier-sources distance.", + "arxiv_url": "http://arxiv.org/abs/hep-th/9808009v2", + "pdf_url": "http://arxiv.org/pdf/hep-th/9808009v2", + "published_date": "1998-08-03", + "categories": [ + "hep-th", + "gr-qc" + ], + "github_url": "", + "keywords": [ + "3d gaussian", + "ar" + ], + "citations": 0, + "semantic_url": "" + }, + { + "title": "Equilibrium and off-equilibrium simulations of the 4d Gaussian spin glass", + "authors": [ + "Giorgio Parisi", + "Federico Ricci-Tersenghi", + "Juan J. Ruiz-Lorenzo" + ], + "abstract": "In this paper we study the on and off-equilibrium properties of the four dimensional Gaussian spin glass. 
In the static case we determine with more precision than in previous simulations both the critical temperature and the critical exponents. In the off-equilibrium case we settle the general form of the autocorrelation function, and show that it is possible to obtain dynamically, for the first time, a value for the order parameter.", + "arxiv_url": "http://arxiv.org/abs/cond-mat/9606051v2", + "pdf_url": "http://arxiv.org/pdf/cond-mat/9606051v2", + "published_date": "1996-06-09", + "categories": [ + "cond-mat" + ], + "github_url": "", + "keywords": [ + "dynamic", + "4d", + "ar" ], "citations": 0, "semantic_url": ""