diff --git a/README.md b/README.md
index 0c508cc..509154d 100644
--- a/README.md
+++ b/README.md
@@ -19,7 +19,7 @@ G-buffer Objaverse: High-Quality Rendering Dataset of Objaverse.
 [Zilong Dong](https://scholar.google.com/citations?user=GHOQKCwAAAAJ&hl=zh-CN&oi=ao), [Liefeng Bo](https://scholar.google.com/citations?user=FJwtMf0AAAAJ&hl=zh-CN)
-## [Project page](https://aigc3d.github.io/gobjaverse/) | [YouTube](https://www.youtube.com/watch?v=2uSplFflZFs) | [RichDreamer](https://lingtengqiu.github.io/RichDreamer/) | [ND-Diffusion Model](https://github.com/modelscope/normal-depth-diffusion)
+## [Project page](https://aigc3d.github.io/gobjaverse/) | [YouTube](https://www.youtube.com/watch?v=PWweS-EPbJo) | [RichDreamer](https://lingtengqiu.github.io/RichDreamer/) | [ND-Diffusion Model](https://github.com/modelscope/normal-depth-diffusion)
 ## News
diff --git a/index.html b/index.html
index 7cb269f..b56da18 100644
--- a/index.html
+++ b/index.html
@@ -142,7 +142,7 @@

G-buffer Objaverse: High-Quality Render -
@@ -206,7 +206,7 @@

Introduction

- G-buffer Objaverse (GObjaverse) is rendered using the TIDE renderer on Objaverse with A10 for about 2000 GPU hours, yielding 30,000,000 images of Albedo, RGB, Depth, and Normal map. We proposed a rendering framework for high quality and high speed dataset rendering. The framework is a hybrid of rasterization and path tracing, the first ray-scene intersection is obtained by hardware rasterization and accurate indirect lighting by full hardware path tracing. Additionally, we using adaptive sampling, denoiser and path-guiding to further speed up the rendering time. In this rendering framework, we render 38 views of a centered object, including 24 views at elevation range from 5° to 30°, rotation = {r × 15° | r ∈ [0, 23]}, and 12 views at elevation from -5° to 5°, rotation = {r × 30° | r ∈ [0, 11]}, and 2 views for top and bottom respectively. In addition, we mannuly split the subset of the objaverse dataset into 10 general categories including Human-Shape (41,557), Animals (28,882), Daily-Used (220,222), Furnitures (19,284), Buildings&&Outdoor (116,545), Transportations (20,075), Plants (7,195), Food (5,314), Electronics (13,252) and Poor-quality (107,001).
+ G-buffer Objaverse is rendered with the TIDE renderer on Objaverse, using A10 GPUs for about 2,000 GPU hours and yielding 30,000,000 images of Albedo, RGB, Depth, and Normal maps. We propose a rendering framework for high-quality, high-speed dataset rendering. The framework is a hybrid of rasterization and path tracing: the first ray-scene intersection is obtained by hardware rasterization, and accurate indirect lighting is computed by full hardware path tracing. Additionally, we use adaptive sampling, denoising, and path guiding to further reduce rendering time. In this framework, we render 38 views of each centered object: 24 views at elevations from 5° to 30° with rotation = {r × 15° | r ∈ [0, 23]}, 12 views at elevations from -5° to 5° with rotation = {r × 30° | r ∈ [0, 11]}, and 2 views for the top and bottom, respectively. In addition, we manually split this subset of the Objaverse dataset into 10 general categories: Human-Shape (41,557), Animals (28,882), Daily-Used (220,222), Furnitures (19,284), Buildings&&Outdoor (116,545), Transportations (20,075), Plants (7,195), Food (5,314), Electronics (13,252), and Poor-quality (107,001).
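To make the view layout above concrete, here is a minimal Python sketch that enumerates the 38 poses. It is not part of the released rendering tooling; the azimuth grids and view counts follow the text, while how the elevation is chosen inside each stated range is an assumption (sampled uniformly here).

```python
import random

def gobjaverse_view_layout(seed=0):
    """Sketch of the 38-view layout: 24 views at elevations 5°-30°,
    12 views at elevations -5°-5°, plus top and bottom views.

    Assumption: the exact elevation within each range is not specified,
    so it is sampled uniformly here for illustration.
    """
    rng = random.Random(seed)
    views = []
    # 24 views: elevation in [5°, 30°], azimuth = r * 15° for r in 0..23
    for r in range(24):
        views.append({"azimuth": r * 15.0, "elevation": rng.uniform(5.0, 30.0)})
    # 12 views: elevation in [-5°, 5°], azimuth = r * 30° for r in 0..11
    for r in range(12):
        views.append({"azimuth": r * 30.0, "elevation": rng.uniform(-5.0, 5.0)})
    # 2 views: top and bottom
    views.append({"azimuth": 0.0, "elevation": 90.0})   # top
    views.append({"azimuth": 0.0, "elevation": -90.0})  # bottom
    assert len(views) == 38
    return views
```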

MY ALT TEXT
@@ -227,7 +227,7 @@

Video

-
+
@@ -246,7 +246,7 @@

Application

- We have used GObjaverse for training MultiView Normal-Depth diffusion model (ND-MV) and depth-condition MultiView Albedo diffusion model (Albedo-MV), which are employed for 3D object generation through score-distillation sampling (SDS) in RichDreamer .
+ We have used G-buffer Objaverse (GObjaverse) to train the MultiView Normal-Depth diffusion model (ND-MV) and the depth-conditioned MultiView Albedo diffusion model (Albedo-MV), which are employed for 3D object generation through score distillation sampling (SDS) in RichDreamer.

MY ALT TEXT