
Commit

update README.md and index.html
圆枕 committed Jan 7, 2024
1 parent 32a76ee commit bb75381
Showing 2 changed files with 5 additions and 5 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -19,7 +19,7 @@ G-buffer Objaverse: High-Quality Rendering Dataset of Objaverse.
[Zilong Dong](https://scholar.google.com/citations?user=GHOQKCwAAAAJ&hl=zh-CN&oi=ao),
[Liefeng Bo](https://scholar.google.com/citations?user=FJwtMf0AAAAJ&hl=zh-CN)

## [Project page](https://aigc3d.github.io/gobjaverse/) | [YouTube](https://www.youtube.com/watch?v=2uSplFflZFs) | [RichDreamer](https://lingtengqiu.github.io/RichDreamer/) | [ND-Diffusion Model](https://github.com/modelscope/normal-depth-diffusion)
## [Project page](https://aigc3d.github.io/gobjaverse/) | [YouTube](https://www.youtube.com/watch?v=PWweS-EPbJo) | [RichDreamer](https://lingtengqiu.github.io/RichDreamer/) | [ND-Diffusion Model](https://github.com/modelscope/normal-depth-diffusion)


## News
8 changes: 4 additions & 4 deletions index.html
@@ -142,7 +142,7 @@ <h1 class="title is-1 publication-title">G-buffer Objaverse: High-Quality Render

<!-- Video link -->
<span class="link-block">
<a href="https://www.youtube.com/watch?v=2uSplFflZFs" target="_blank"
<a href="https://www.youtube.com/watch?v=PWweS-EPbJo" target="_blank"
class="external-link button is-normal is-rounded is-dark">
<span class="icon">
<i class="fab fa-youtube"></i>
@@ -206,7 +206,7 @@ <h2 class="subtitle has-text-centered">
<h2 class="title is-3">Introduction</h2>
<div class="content has-text-justified">
<p>
G-buffer Objaverse (GObjaverse) is rendered with the <a href="https://developer.aliyun.com/article/784784?spm=a2c6h.14164896.0.0.d7c247c5q1Pb9G&scm=20140722.S_community@@%E6%96%87%E7%AB%A0@@784784._.ID_784784-RL_tidejs-LOC_search~UND~community~UND~item-OR_ser-V_3-P0_0" target="_blank" style="color: rgb(47, 141, 255);">TIDE</a> renderer on Objaverse, using A10 GPUs for about 2,000 GPU hours and yielding 30,000,000 albedo, RGB, depth, and normal-map images. We propose a rendering framework for high-quality, high-speed dataset rendering. The framework is a hybrid of rasterization and path tracing: the first ray-scene intersection is obtained by hardware rasterization, and accurate indirect lighting is computed by full hardware path tracing. Additionally, we use adaptive sampling, denoising, and path guiding to further reduce rendering time. In this framework, we render 38 views of each centered object: 24 views at elevations from 5° to 30° with rotation = {r × 15° | r ∈ [0, 23]}, 12 views at elevations from -5° to 5° with rotation = {r × 30° | r ∈ [0, 11]}, and 2 views from the top and bottom respectively. In addition, we manually split the subset of the Objaverse dataset into 10 general categories: Human-Shape (41,557), Animals (28,882), Daily-Used (220,222), Furnitures (19,284), Buildings&&Outdoor (116,545), Transportations (20,075), Plants (7,195), Food (5,314), Electronics (13,252), and Poor-quality (107,001).
G-buffer Objaverse is rendered with the <a href="https://developer.aliyun.com/article/784784?spm=a2c6h.14164896.0.0.d7c247c5q1Pb9G&scm=20140722.S_community@@%E6%96%87%E7%AB%A0@@784784._.ID_784784-RL_tidejs-LOC_search~UND~community~UND~item-OR_ser-V_3-P0_0" target="_blank" style="color: rgb(47, 141, 255);">TIDE</a> renderer on Objaverse, using A10 GPUs for about 2,000 GPU hours and yielding 30,000,000 albedo, RGB, depth, and normal-map images. We propose a rendering framework for high-quality, high-speed dataset rendering. The framework is a hybrid of rasterization and path tracing: the first ray-scene intersection is obtained by hardware rasterization, and accurate indirect lighting is computed by full hardware path tracing. Additionally, we use adaptive sampling, denoising, and path guiding to further reduce rendering time. In this framework, we render 38 views of each centered object: 24 views at elevations from 5° to 30° with rotation = {r × 15° | r ∈ [0, 23]}, 12 views at elevations from -5° to 5° with rotation = {r × 30° | r ∈ [0, 11]}, and 2 views from the top and bottom respectively. In addition, we manually split the subset of the Objaverse dataset into 10 general categories: Human-Shape (41,557), Animals (28,882), Daily-Used (220,222), Furnitures (19,284), Buildings&&Outdoor (116,545), Transportations (20,075), Plants (7,195), Food (5,314), Electronics (13,252), and Poor-quality (107,001).
</p>
<img src="static/images/intro.png" alt="MY ALT TEXT"/>
</div>
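The camera layout described in the introduction above can be made concrete with a short sketch. Below is a minimal Python example that enumerates the 38 (elevation, azimuth) view angles per object; the function name and the uniform random choice of elevation within each stated range are illustrative assumptions rather than details of the released rendering pipeline.

```python
import random

def sample_gobjaverse_views(seed=0):
    """Enumerate the 38 per-object views described in the introduction.

    The azimuth grids follow the text above; how the elevation is chosen
    inside each stated range is not specified there, so uniform random
    sampling is an assumption of this sketch.
    """
    rng = random.Random(seed)
    views = []  # (elevation_deg, azimuth_deg) pairs

    # 24 views: azimuth r * 15 deg for r in [0, 23], elevation in [5, 30] deg
    for r in range(24):
        views.append((rng.uniform(5.0, 30.0), r * 15.0))

    # 12 views: azimuth r * 30 deg for r in [0, 11], elevation in [-5, 5] deg
    for r in range(12):
        views.append((rng.uniform(-5.0, 5.0), r * 30.0))

    # 2 views: straight top and bottom
    views.append((90.0, 0.0))
    views.append((-90.0, 0.0))

    assert len(views) == 38
    return views
```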
@@ -227,7 +227,7 @@ <h2 class="title is-3">Video</h2>

<div class="publication-video">
<!-- Youtube embed code here -->
<iframe src="https://www.youtube.com/embed/2uSplFflZFs" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
<iframe src="https://www.youtube.com/embed/PWweS-EPbJo" frameborder="0" allow="autoplay; encrypted-media" allowfullscreen></iframe>
</div>
</div>
</div>
@@ -246,7 +246,7 @@ <h2 class="title is-3">Application</h2>
<!-- Your image here -->
<!-- <img src="static/images/application.png" alt="MY ALT TEXT"/>-->
<h2 class="content has-text-justified">
We have used GObjaverse to train a MultiView Normal-Depth diffusion model (<a href="https://github.com/modelscope/normal-depth-diffusion/" style="color: rgb(47, 141, 255);" target="_blank">ND-MV</a>) and a depth-conditioned MultiView Albedo diffusion model (<a href="https://github.com/modelscope/normal-depth-diffusion/" style="color: rgb(47, 141, 255);" target="_blank">Albedo-MV</a>), which are employed for 3D object generation through score-distillation sampling (SDS) in <a href="https://aigc3d.github.io/richdreamer/" style="color: rgb(47, 141, 255);" target="_blank">RichDreamer</a>.
We have used G-buffer Objaverse (GObjaverse) to train a MultiView Normal-Depth diffusion model (<a href="https://github.com/modelscope/normal-depth-diffusion/" style="color: rgb(47, 141, 255);" target="_blank">ND-MV</a>) and a depth-conditioned MultiView Albedo diffusion model (<a href="https://github.com/modelscope/normal-depth-diffusion/" style="color: rgb(47, 141, 255);" target="_blank">Albedo-MV</a>), which are employed for 3D object generation through score-distillation sampling (SDS) in <a href="https://aigc3d.github.io/richdreamer/" style="color: rgb(47, 141, 255);" target="_blank">RichDreamer</a>.
</h2>
<img src="static/images/application.png" alt="MY ALT TEXT"/>
</div>
