
A curated list of resources closing the gap between machine learning and digital media (computer graphics, computer vision, computational imaging, animation, VFX, game development, ...). No in-depth explanations, just an overview of the landscape and possible starting points for further research.

The field is broad and resources often span several of these areas at once; I sort each entry into the category I find most fitting.

Feel free to contribute.


Table of Contents

  • Audio
  • Character Animation
  • Computer Graphics
  • Computer Vision
  • Neural Rendering
  • Visual Computing
  • License

Audio

  • Real-Time Guitar Amplifier Emulation with Deep Learning (2020), Wright et al. [link] [pdf] [demo]
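
For orientation, below is a minimal, illustrative PyTorch sketch of the kind of recurrent baseline used for black-box amp modelling: an LSTM maps the dry guitar signal to the amplified signal and is trained with an error-to-signal-ratio loss. The layer size, residual connection and loss here are assumptions for illustration, not the exact configuration from Wright et al.

```python
# Illustrative recurrent amp-emulation baseline (not the paper's exact model):
# an LSTM predicts the amplified signal from the dry signal, sample by sample.
import torch
import torch.nn as nn

class AmpEmulator(nn.Module):
    def __init__(self, hidden_size: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden_size, batch_first=True)
        self.out = nn.Linear(hidden_size, 1)

    def forward(self, dry: torch.Tensor) -> torch.Tensor:
        # dry: (batch, samples, 1) mono audio in [-1, 1]
        h, _ = self.lstm(dry)
        # Residual connection: the network learns the nonlinear "colour"
        # added on top of the dry signal.
        return dry + self.out(h)

def esr_loss(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    # Error-to-signal ratio, a common loss for amp modelling.
    return ((target - pred) ** 2).sum() / ((target ** 2).sum() + eps)

model = AmpEmulator()
dry = torch.randn(4, 2048, 1) * 0.1   # fake batch of dry audio frames
wet = torch.tanh(5 * dry)             # stand-in "amplifier" target
esr_loss(model(dry), wet).backward()
```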

Character Animation

Papers

  • DeepPhase: Periodic Autoencoders for Learning Motion Phase Manifolds (2022), Starke et al. [link] [pdf] [video]

  • RigNet: Neural Rigging for Articulated Characters (2020), Xu et al. [link] [pdf]

  • Learned Motion Matching (2020), Holden et al. [link] [pdf]

  • Local Motion Phases for Learning Multi-Contact Character Movements (2020), Starke et al. [link] [pdf]

  • Neural State Machine for Character-Scene Interactions (2019), Starke et al. [link] [pdf]

  • DReCon: Data-Driven Responsive Control of Physics-Based Characters (2019), Bergamin et al. [link] [pdf]

  • Mode-Adaptive Neural Networks for Quadruped Motion Control (2018), Zhang et al. [link] [pdf]

  • Phase-Functioned Neural Networks for Character Control (2017), Holden et al. [link] [pdf]
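
The phase-functioned idea from Holden et al. (last entry above) is that the controller's weights are themselves a cyclic function of a phase variable describing where the character is in its gait cycle. The sketch below is a heavily reduced, illustrative PyTorch version with toy feature sizes, two layers and no biases; the actual network is larger and part of a full character-control pipeline.

```python
# Illustrative phase-functioned network: four control weight sets are blended
# with a Catmull-Rom spline as a function of the phase, so the effective
# network changes smoothly and cyclically over the gait cycle.
import math
import torch
import torch.nn.functional as F

K = 4                      # control points around the phase cycle
IN, HID, OUT = 48, 64, 32  # toy feature sizes (the real model is larger)

W0 = torch.randn(K, HID, IN) * 0.1   # weight banks for the two layers
W1 = torch.randn(K, OUT, HID) * 0.1

def catmull_rom(y0, y1, y2, y3, mu):
    # Cubic interpolation between y1 and y2 with parameter mu in [0, 1].
    return ((-0.5 * y0 + 1.5 * y1 - 1.5 * y2 + 0.5 * y3) * mu ** 3
            + (y0 - 2.5 * y1 + 2.0 * y2 - 0.5 * y3) * mu ** 2
            + (-0.5 * y0 + 0.5 * y2) * mu
            + y1)

def phase_weights(bank, phase):
    # phase in [0, 2*pi): pick the surrounding control points and interpolate.
    p = phase / (2 * math.pi) * K
    i = int(p) % K
    mu = p - int(p)
    y = [bank[(i + j - 1) % K] for j in range(4)]
    return catmull_rom(*y, mu)

def pfnn_forward(x, phase):
    w0 = phase_weights(W0, phase)
    w1 = phase_weights(W1, phase)
    return F.linear(F.elu(F.linear(x, w0)), w1)

out = pfnn_forward(torch.randn(1, IN), phase=1.3)   # -> (1, OUT)
```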

Datasets

  • LAFAN1 - Ubisoft La Forge Animation Dataset (2020), Harvey et al. [link]

Projects

  • AI4Animation: Deep Learning, Character Animation, Control [link]

Computer Graphics

Papers

  • Temporally Stable Real-Time Joint Neural Denoising and Supersampling (2022), Thomas et al. [link] [pdf] [video]

  • MaterialGAN: Reflectance Capture using a Generative SVBRDF Model (2020), Guo et al. [link] [pdf]

  • Neural Supersampling for Real-time Rendering (2020), Xiao et al. [link] [pdf] (see the simplified sketch after this list)

  • Using Deep Convolutional Neural Networks to Detect Rendered Glitches in Video Games (2020), Ling et al. [link] [pdf]
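
As referenced in the Neural Supersampling entry above, learned upscalers for real-time rendering are small convolutional networks that reconstruct a high-resolution frame from a cheaply rendered low-resolution one. The sketch below is a deliberately simplified, single-frame PyTorch example using sub-pixel convolution; the actual systems above also ingest depth, motion vectors and temporally warped history frames.

```python
# Simplified single-frame 2x upscaler using sub-pixel convolution
# (illustrative only; real supersamplers use auxiliary G-buffers and history).
import torch
import torch.nn as nn

class TinyUpsampler(nn.Module):
    def __init__(self, scale: int = 2, width: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 3 * scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),   # rearranges channels into a scale-x larger image
        )

    def forward(self, low_res: torch.Tensor) -> torch.Tensor:
        # low_res: (B, 3, H, W) -> (B, 3, scale*H, scale*W)
        return self.net(low_res)

hi_res = TinyUpsampler()(torch.rand(1, 3, 270, 480))   # -> (1, 3, 540, 960)
```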

Talks / Courses / Tutorials / Workshops

  • CreativeAI: Deep Learning for Graphics (2019), Mitra et al. [link]

Projects

  • Real-time style transfer in Unity using deep neural networks (2020), Deliot et al. [link]

Computer Vision

Papers

  • SMALR - Capturing Animal Shape and Texture from Images (2018), Zuffi et al. [link]

Talks / Courses / Tutorials / Workshops

  • 3DGV: Seminar on 3D Geometry and Vision (2020) [link]

Datasets

  • Hypersim: A Photorealistic Synthetic Dataset for Holistic Indoor Scene Understanding (2020), Roberts et al. [link]

  • KITTI-360: A large-scale dataset with 3D & 2D annotations (2020), Xie et al. [link]

Neural Rendering

State of the Art / Surveys

  • Advances in Neural Rendering (2022), Tewari et al. [link] [pdf] [video]

  • State of the Art on Neural Rendering (2020), Tewari et al. [link] [pdf]

Papers

  • NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis (2020), Mildenhall et al. [link] [pdf] (see the minimal sketch after this list)

  • Deformable Neural Radiance Fields (2020), Park et al. [link] [pdf]

  • NeX: Real-time View Synthesis with Neural Basis Expansion (2021), Wizadwongsa et al. [link] [pdf]

  • Deep Relightable Appearance Models for Animatable Faces (2021), Bi et al. [link] [pdf] [video]

  • GANcraft - Unsupervised 3D Neural Rendering of Minecraft Worlds (2021), Hao et al. [link] [pdf] [video]

  • X-Fields: Implicit Neural View-, Light- and Time-Image Interpolation (2020), Bemana et al. [link] [pdf]

  • Learning to Simulate Dynamic Environments with GameGAN (2020), Kim et al. [link] [pdf]

  • D-NeRF: Neural Radiance Fields for Dynamic Scenes (2020), Pumarola et al. [link] [pdf]

  • VR Facial Animation via Multiview Image Translation (2019), Wei et al. [link] [pdf]

  • Face2Face: Real-time Face Capture and Reenactment of RGB Videos (2019), Thies et al. [link] [pdf]

  • Deep Appearance Models for Face Rendering (2018), Lombardi et al. [link] [pdf]

  • Deep Shading: Convolutional Neural Networks for Screen-Space Shading (2017), Nalbach et al. [link] [pdf]
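
For readers new to the NeRF line of work above, the sketch below shows the two core ingredients shared by many of these papers: a sinusoidal positional encoding of 3D points and an MLP that predicts colour and density. It is an illustrative toy with reduced widths, no view-direction input and no volume rendering, not the authors' implementation.

```python
# Toy NeRF-style network: frequency-encode 3D points, then predict
# (r, g, b, sigma) with a small MLP. Illustrative sizes only.
import math
import torch
import torch.nn as nn

def positional_encoding(x: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
    # x: (N, 3) points -> (N, 3 + 3 * 2 * n_freqs) features.
    feats = [x]
    for i in range(n_freqs):
        for fn in (torch.sin, torch.cos):
            feats.append(fn((2.0 ** i) * math.pi * x))
    return torch.cat(feats, dim=-1)

class TinyNeRF(nn.Module):
    def __init__(self, n_freqs: int = 6, width: int = 128):
        super().__init__()
        self.n_freqs = n_freqs
        self.mlp = nn.Sequential(
            nn.Linear(3 + 3 * 2 * n_freqs, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
            nn.Linear(width, 4),              # (r, g, b, sigma)
        )

    def forward(self, points: torch.Tensor):
        h = self.mlp(positional_encoding(points, self.n_freqs))
        rgb = torch.sigmoid(h[..., :3])       # colour in [0, 1]
        sigma = torch.relu(h[..., 3:])        # non-negative density
        return rgb, sigma

rgb, sigma = TinyNeRF()(torch.rand(1024, 3))  # query 1024 sample points
```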

Talks / Courses / Tutorials / Workshops

Visual Computing

Papers

  • Hierarchical Text-Conditional Image Generation with CLIP Latents (2022), Ramesh et al. [link] [pdf]

  • Zero-Shot Text-to-Image Generation (2021), Ramesh et al. [link] [pdf] [code]

  • Infinite Nature: Perpetual View Generation of Natural Scenes from a Single Image (2020), Liu et al. [link] [pdf] [code] [colab] [video]

  • Stylized Neural Painting (2020), Zou et al. [link] [pdf] [code] [colab] [video]

  • Semantic Image Synthesis with Spatially-Adaptive Normalization (2019), Park et al. [link] [pdf] (see the minimal sketch after this list)

  • Photo-Realistic Single Image Super-Resolution Using a Generative Adversarial Network (2016), Ledig et al. [link] [pdf]
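
The SPADE entry above (Park et al. 2019) conditions image synthesis on a segmentation map through spatially-adaptive normalization: the map predicts per-pixel scale and bias that modulate normalized activations. Below is a minimal, illustrative PyTorch sketch of one such block with assumed layer sizes; the full generator stacks many of these inside a GAN.

```python
# Illustrative SPADE-style normalization block (assumed sizes, not the
# reference implementation): the segmentation map is resized to the feature
# resolution and predicts per-pixel modulation of the normalized activations.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SPADE(nn.Module):
    def __init__(self, feat_channels: int, label_channels: int, hidden: int = 64):
        super().__init__()
        self.norm = nn.BatchNorm2d(feat_channels, affine=False)
        self.shared = nn.Sequential(
            nn.Conv2d(label_channels, hidden, kernel_size=3, padding=1), nn.ReLU()
        )
        self.gamma = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)
        self.beta = nn.Conv2d(hidden, feat_channels, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor, segmap: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) activations; segmap: (B, L, Hs, Ws) one-hot labels.
        seg = F.interpolate(segmap, size=x.shape[2:], mode="nearest")
        h = self.shared(seg)
        return self.norm(x) * (1 + self.gamma(h)) + self.beta(h)

x = torch.randn(2, 32, 64, 64)              # intermediate generator features
segmap = torch.randn(2, 10, 256, 256)       # stand-in 10-class label map
out = SPADE(feat_channels=32, label_channels=10)(x, segmap)
```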

Talks / Courses / Tutorials / Workshops

  • TUM AI Lecture Series - AI for 3D Content Creation (2020), Sanja Fidler [video]

License

CC0 (public domain dedication)