
Unified 2D Rendering #5

Open · wants to merge 4 commits into base: master

Conversation

AnneKitsune
Contributor

@AnneKitsune AnneKitsune commented Nov 14, 2018

This RFC proposes an architecture that converts user-friendly components into reusable pass-friendly components.

Rendered version: Rendered

Let me know if you have any questions. :)

Member

@torkleyy torkleyy left a comment

I think consistency is generally a goal we should pursue.

I'd like to know more about the drawbacks; can you elaborate on the existing point and eventually add more? Have you looked into the code already to see if other changes are required internally?

@AnneKitsune
Contributor Author

I'll look more into that shortly. The main thing that could cause issues is that systems which work on both 3D and 2D transforms will either need to be split or manipulate both Transform3D and Transform2D at once.
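
For illustration, here is a rough sketch of what that looks like from a system's point of view; Transform3D/Transform2D are placeholders for the proposed components (real definitions further down in this thread), and the system name is made up:

// Rough sketch only: the transform types stand in for the proposed components.
use specs::{Component, DenseVecStorage, Join, ReadStorage, System};

struct Transform3D; // placeholder for the proposed 3D transform
impl Component for Transform3D {
    type Storage = DenseVecStorage<Self>;
}

struct Transform2D; // placeholder for the proposed 2D transform
impl Component for Transform2D {
    type Storage = DenseVecStorage<Self>;
}

/// Hypothetical system that must support entities using either transform kind.
struct FollowTargetSystem;

impl<'a> System<'a> for FollowTargetSystem {
    type SystemData = (
        ReadStorage<'a, Transform3D>,
        ReadStorage<'a, Transform2D>,
    );

    fn run(&mut self, (transforms_3d, transforms_2d): Self::SystemData) {
        // With the split, the same logic has to be written (or generalized) twice:
        for _t in (&transforms_3d).join() {
            // 3D variant of the logic
        }
        for _t in (&transforms_2d).join() {
            // 2D variant of the logic
        }
    }
}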

@Rhuagh
Member

Rhuagh commented Nov 15, 2018

What is the motivation for refactoring Material for 3d?

@AnneKitsune
Contributor Author

AnneKitsune commented Nov 15, 2018

I think a mesh always has an albedo texture, but not always the other texture types, so I made it reflect that. In the case where you use an alternative 3D pass, for example Wireframe, you can omit the texture and use a Wireframe tag component. The whole concept is that instead of having a PassSelector component, it's the attached components that determine the behavior. If you have PbmMaterial and TextureHandle, you draw using the PBM pass. If you don't have PbmMaterial but you do have TextureHandle, you use DrawFlat.
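
To make that concrete, here is a minimal, hypothetical sketch of how a pass could select entities purely from their attached components; TextureHandle and PbmMaterial are placeholder types, and the real pass code would live inside the renderer:

// Minimal sketch of "the attached components select the pass".
use specs::{Component, DenseVecStorage, Join, ReadStorage, System};

struct TextureHandle; // placeholder for Handle<Texture>
impl Component for TextureHandle {
    type Storage = DenseVecStorage<Self>;
}

struct PbmMaterial; // metallic and the other PBM maps would live here
impl Component for PbmMaterial {
    type Storage = DenseVecStorage<Self>;
}

struct PassSelection;

impl<'a> System<'a> for PassSelection {
    type SystemData = (
        ReadStorage<'a, TextureHandle>,
        ReadStorage<'a, PbmMaterial>,
    );

    fn run(&mut self, (textures, pbm_materials): Self::SystemData) {
        // Texture + PbmMaterial present => draw with the PBM pass.
        for (_tex, _mat) in (&textures, &pbm_materials).join() {
            // record a DrawPbm call
        }
        // Texture present but no PbmMaterial => fall back to DrawFlat.
        for (_tex, _) in (&textures, !&pbm_materials).join() {
            // record a DrawFlat call
        }
    }
}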

@zakarumych
Member

zakarumych commented Nov 15, 2018

I'm doing a quick read and writing down some statements and questions here.

  • "2D Mesh with Texture": there is no good reason to have a mesh with just a quad in it.
  • Don't we already have Transform3D and Transform2D types from nalgebra?
  • Is ScreenSpace a marker component? It could be confusing if someone treated all Transform2D values consistently while some of them are in screen-space coordinates.
  • Many effects may lead to a combinatorial explosion. We should keep them under control. It also seems like a waste of space to have them as components.
  • 100_000 circle meshes will be drawn far slower than 100_000 quads with a circle as a texture, or with a specific fragment shader that sets alpha to 0 outside the circular area and 1 inside it.
  • It would be great to rasterize SVG with the help of the GPU. But if you only need to do it once (at scene loading) then you shouldn't bother and can rasterize on the CPU. If you need to draw SVG at a different size each frame, then you definitely need GPU power for it. The same goes for text rasterization. The best option would be to have a pipeline for SVG and text rasterization; we can make it usable manually (to rasterize on load) and as part of the render graph.

@zakarumych
Member

BTW, the one thing you absolutely need in order to draw 100_000 of something is instanced drawing.
Probably indirect drawing too, if you want to draw different numbers of different things from frame to frame.
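
For illustration, per-instance data for instanced 2D drawing could look roughly like this (the layout is hypothetical, not an existing Amethyst type):

#[repr(C)]
#[derive(Clone, Copy)]
struct SpriteInstance {
    /// World-space translation of the quad (x, y, depth/layer).
    translation: [f32; 3],
    /// Rotation around the quad center, in radians.
    rotation: f32,
    /// Half-extents of the quad.
    half_size: [f32; 2],
    /// UV rectangle into the sprite sheet: (u_min, v_min, u_max, v_max).
    uv_rect: [f32; 4],
    /// Per-instance tint, multiplied with the sampled texel.
    tint: [f32; 4],
}

// One buffer of SpriteInstance values is uploaded per frame and drawn with a
// single instanced draw call: the vertex shader expands a hardcoded unit quad
// and offsets it using the instance attributes.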

@Rhuagh
Member

Rhuagh commented Nov 15, 2018

@Jojolepro if so we should have separate materials for all passes, but that will also make it harder to swap passes during development.

@AnneKitsune
Contributor Author

@Rhuagh you are right, but the number of passes will go down by a lot with this RFC. All 2D passes will be merged into two passes (lit and flat). 3D stays with the same passes. I don't want to touch 3D right now, but I imagine something like the following (a rough component sketch follows the list):

DrawFlat:

  • Mesh
  • Texture

DrawShaded:

  • Mesh
  • Texture
  • Shaded (tag)

DrawPbm:

  • Mesh
  • Texture (albedo)
  • PbmMaterial (metallic + others)
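
For illustration, a hypothetical sketch of the components that list implies; the names, fields, and import paths are approximate rather than existing Amethyst types:

// Hedged sketch only; Handle/Texture import paths are approximate.
use amethyst::assets::Handle;
use amethyst::renderer::Texture;
use specs::{Component, DenseVecStorage, NullStorage};

/// Tag component: its presence alone moves an entity from DrawFlat to DrawShaded.
#[derive(Default)]
struct Shaded;
impl Component for Shaded {
    type Storage = NullStorage<Self>;
}

/// Extra maps used only by the PBM pass; the albedo texture stays a separate
/// component shared with the other passes.
struct PbmMaterial {
    metallic: Handle<Texture>,
    roughness: Handle<Texture>,
    normal: Handle<Texture>,
    ambient_occlusion: Handle<Texture>,
}
impl Component for PbmMaterial {
    type Storage = DenseVecStorage<Self>;
}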

@AnneKitsune
Contributor Author

@omni-viral

"2D Mesh with Texture" no good reason to have mesh with quad in it.

There won't be a Mesh attached to the entities. I was thinking of a internal square mesh that is reused for everything, like in the current DrawUi Pass.

Don't we already have Transform3D and Transform2D types from nalgebra?

I haven't even used nalgebra once since it was merged, but I'll look into it. :)

Is ScreenSpace a marker component? It could be confusing if someone treated all Transform2D values consistently while some of them are in screen-space coordinates.

Yeah, it's a tag component (NullStorage). Normally, you don't want to join over every single entity that has a Transform2D without an extra constraint in your .join().

If that's what you want to do, you can always use AntiStorage:

(&transforms2d, !&screenspace).join() // Joins over Transform2D entities that don't have ScreenSpace.
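
A slightly fuller sketch of that join, with placeholder definitions for the proposed Transform2D and ScreenSpace components (ScreenSpace being a tag in NullStorage, so !& excludes entities that carry it):

use specs::{Component, DenseVecStorage, Join, NullStorage, ReadStorage, System};

struct Transform2D; // placeholder for the proposed 2D transform
impl Component for Transform2D {
    type Storage = DenseVecStorage<Self>;
}

#[derive(Default)]
struct ScreenSpace; // tag: this entity's Transform2D is in screen-space coordinates
impl Component for ScreenSpace {
    type Storage = NullStorage<Self>;
}

struct WorldSpace2DSystem;

impl<'a> System<'a> for WorldSpace2DSystem {
    type SystemData = (
        ReadStorage<'a, Transform2D>,
        ReadStorage<'a, ScreenSpace>,
    );

    fn run(&mut self, (transforms_2d, screen_space): Self::SystemData) {
        // Only world-space 2D entities: Transform2D present, ScreenSpace absent.
        for (_transform, _) in (&transforms_2d, !&screen_space).join() {
            // world-space-only logic here
        }
    }
}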

Many effects may lead to a combinatorial explosion.

Not sure what to do about that exactly.

It also seems like a waste of space to have them as components.

Technically, even if you have one Component for each effect (i.e. Tint, Glow, Blur), most of them will either be toggles (NullStorage, which takes no space except for a bit in a bitset) or simple configurations (a couple of f32s in a DenseVecStorage).

It takes less space this way if you only use one or two effects at once, instead of having one big component for all effects with 20+ f32 fields inside.
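
For example, per-effect components could look roughly like this (names are hypothetical): a tag effect costs only a bit in the storage's bitset, while a configurable effect carries a handful of f32s:

use specs::{Component, DenseVecStorage, NullStorage};

/// Toggle effect: presence alone enables it.
#[derive(Default)]
struct Glow;
impl Component for Glow {
    type Storage = NullStorage<Self>;
}

/// Configurable effect: a small amount of data per entity that uses it.
struct Tint {
    rgba: [f32; 4],
}
impl Component for Tint {
    type Storage = DenseVecStorage<Self>;
}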

100_000 circle meshes will be drawn far slower than 100_000 quads with a circle as a texture, or with a specific fragment shader that sets alpha to 0 outside the circular area and 1 inside it.

Perfect! So it makes sense not to allow custom 2D meshes, but instead to use sprites combined with an alpha mask (a feature discussed in the UI RFC).

It would be great to rasterize SVG with the help of the GPU.

Discussed on Discord. Rasterization will almost exclusively happen at the start of the scene, except for rapidly changing text (timers, for example). However, gfx_glyph and other libraries already have per-character caching, which solves this issue.

Thanks a lot for your feedback @omni-viral and @Rhuagh !

@Rhuagh
Member

Rhuagh commented Nov 15, 2018

Shaded uses the normal texture too, I think?

@AnneKitsune
Contributor Author

It seems to use Emission https://github.com/amethyst/amethyst/blob/43b70ea3afc52c92e425029ccb417972d93cda73/amethyst_renderer/src/pass/shaders/fragment/shaded.glsl#L41

In which case, the component should probably be ShadedMaterial { emission: Handle<Texture> }
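
For the record, a sketch of that component with a specs Component impl; the import paths are approximate:

use amethyst::assets::Handle; // path approximate
use amethyst::renderer::Texture; // path approximate
use specs::{Component, DenseVecStorage};

struct ShadedMaterial {
    /// Emission map sampled by the shaded fragment shader.
    emission: Handle<Texture>,
}
impl Component for ShadedMaterial {
    type Storage = DenseVecStorage<Self>;
}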

@kabergstrom

kabergstrom commented Nov 15, 2018

While I applaud the effort to unify the rendering of multiple types of components to share more code and passes, I think some of the other assumptions in this RFC may come back to bite us.

Specifically, I believe introducing the Transform2D as you have defined it will be very damaging.

  • Render ordering between Transform2D and Transform3D is ambiguous since Transform2D does not define a Z. Render order between Transform2D and another Transform2D is also unspecified due to the Vector2 representation.
  • Since Transform2D doesn't define a Z, I assume that it will always be rendered on top of Transform3D. This would make Transform2D essentially live in a separate world, which means 3D and 2D cannot be mixed, i.e. showing UI behind a 3D model. Layering particle systems on top of UI would necessitate a separate 2D implementation instead of reusing the 3D implementation.
  • Since Transform2D only contains an f32 for rotation and a Vector2 for position, it will be impossible to apply transforms that make the UI appear to be rotated "into the scene" (see reference in MVC2).
  • Since Transform2D does not contain a depth value, it will be impossible to use Lights to light up 2D elements with normal maps in a way where the light is not coming directly from either side.
  • Since the rendering pass requires Transform2D, you will not be able to render UI elements "projected onto" 3D elements.

I believe it's good to improve ergonomics but I don't think this is the right way to do it. Instead, I would recommend two things:

  • Introduce a simpler 2D interface on top of Transform3D (see the sketch after this list)
  • Introduce support for multiple scenes with a camera in each, or multiple cameras in a single scene. Some way of defining a different view matrix for different sets of objects. This is a more flexible solution than a "ScreenSpace" component - you instead create a camera with a view matrix that essentially does the same thing.
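
As a rough sketch of that first point (helper names are hypothetical, and the Transform setter names are approximate):

use amethyst::core::Transform; // import path approximate

/// Place an entity as a 2D element: x/y in the plane, `layer` riding on the
/// Z axis purely for draw ordering / depth.
fn transform_2d(x: f32, y: f32, layer: f32) -> Transform {
    let mut t = Transform::default();
    t.set_xyz(x, y, layer);
    t
}

/// A 2D rotation is just a rotation around the Z axis.
fn set_rotation_2d(t: &mut Transform, radians: f32) {
    t.set_rotation_euler(0.0, 0.0, radians);
}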

@AnneKitsune
Contributor Author

AnneKitsune commented Nov 15, 2018

@kabergstrom

Render ordering between Transform2D and Transform3D is ambiguous since Transform2D does not define a Z. Render order between Transform2D and another Transform2D is also unspecified due to the Vector2 representation.

Rendering 2D always happens after rendering 3D, no matter what.
Also, Transform2D uses a Vector3 for position (z = depth, the same as used for 3D).

For reference, I explicitly wrote the Transform2D definition here: https://github.com/amethyst/rfcs/pull/5/files#diff-80f13f763d9f72432e39778b4b8a9494R189

showing UI behind a 3D model

I have taken this into account when designing the RFC. While you could in theory use the standard approach to draw text in 3D space, it doesn't work well with layouting. The current solution is that when you want 3D UI, you first draw it to a texture, which is then put onto a 3D mesh (flat plane) in the world. You will also be able to trigger UiEvents using raycasts in the 3D world (I saw a game do that, it's pretty cool). This is part of the bigger UI RFC, which I am still working on. More details coming soon ;)

Since Transform2D only contains a f32 for rotation and a Vector2 for position, it will be impossible to apply transforms that make the UI appear to be rotated "into the scene"

The same way I mentioned in the previous point: it will be on a 3D mesh following the player, with a low rendering priority (also something that will require an RFC). Low rendering priorities are what is used to render objects over other objects even when the depth doesn't make sense, e.g. a gun clipping through a wall in an FPS game will still be rendered on top (Roblox excluded).

Since Transform2D does not contain a depth value, it will be impossible to use Lights to light up 2D elements with normal maps in a way where the light is not coming directly from either side.

I assumed lights could only come from the side in 2 dimensions. Was it a wrong assumption?

Introduce support for multiple scenes [...]

That's a good idea, but it has already been discussed. There wasn't a consensus on whether it was a good idea or not, and no one had a way to go forward with it without making performance and usability worse.

Thanks for the comments!

@AnneKitsune
Contributor Author

Also of note: I'm not 100% satisfied with the current definition of Transform2D. I'm trying to see if I can cut more data from it somehow. It's a bit tedious when you take layouting and parenting into account. I'm still open to propositions for that type.

@kabergstrom

Rendering 2D always happens after rendering 3D, no matter what.
Also, Transform2D uses a Vector3 for position (z = depth, the same as used for 3D).

Sorry, I misread the position vector.

Since the renderer supports 3D I don't see a technical reason to disregard Z and what is effectively the 3D world position of a 2D element. I believe mixing UI/Text/Images with 3D is super important to enable users to create more compelling 2D experiences.

I think an interesting proposition would be to see how we can use the 3D passes to render all 2D elements, while still providing the user with the experience they want.

From a usability standpoint, users don't want to have to care about a third dimension or weird camera transforms when they start out, but I think this can be solved with the IR.

I have taken this into account when designing the RFC. While you could in theory use the standard approach to draw text in 3D space, it doesn't work well with layouting. The current solution is that when you want 3D UI, you first draw it to a texture, which is then put onto a 3D mesh (flat plane) in the world. You will also be able to trigger UiEvents using raycasts in the 3D world (I saw a game do that, it's pretty cool). This is part of the bigger UI RFC, which I am still working on. More details coming soon ;)

Layouting can work fine in 3D if it only considers 2 dimensions of the space defined by a parent. Sure, it would be weird to rotate something that has been laid out in 2 dimensions, but don't do that. :)

I assumed lights could only come from the side in 2 dimensions. Was it a wrong assumption?

Even if a world is authored using 2D art, normal maps can still be generated and used to provide a 3D feeling if the engine provides 3D rendering.

@zakarumych
Member

zakarumych commented Nov 16, 2018

I was thinking of an internal square mesh that is reused for everything, like in the current DrawUi pass.

No, you don't need that either. You can just hardcode the quad into the vertex shader.
For text rendering, though, you'd need a vertex buffer storing one quad per glyph,
and you would write into it on a per-frame basis, so the Mesh abstraction won't be useful there.
But that's OK; rendering code can be unsafe and not super ergonomic.

@AnneKitsune
Contributor Author

I need more opinions on the Guide-Level Explanation -> Transform2D section.

I want to know who is for and who is against separating Transform into Transform2D and Transform3D.

Drawbacks:

  • More code: you now either need to make one System for 2D and one for 3D, or handle both components in the same system.
  • It will also duplicate the TransformSystem logic and GlobalTransform, because of major differences in the handling.

Positive effects:

  • Systems have their 3D vs 2D vs both usage specified at the type level. You wouldn't want someone using a 3D system on a 2D scene in most cases.
  • Better separation of 2D and 3D physics.
  • Simpler 2D Transform API, constrained by the type system instead of by conventions.
  • Better performance for 2D.

Here's the sample implementation of both Transform3D and Transform2D:

3D:

pub struct Transform3D {
    /// Translation + rotation value
    iso: Isometry3<f32>,
    /// Scale vector
    scale: Vector3<f32>,
}

2D:

pub struct Transform2D {
    translation: Vector2<f32>,
    layer: u32, // z coordinate, but faster. Higher layer = on top.
    rotation: f32, // Rad, anti-clockwise
    dimension: Vector2<f32>, // All 2d elements have fixed dimensions. This is used by the renderer(read sprites and ui) and potentially by physics.
    scale: Vector2<f32>, // Used to multiply your own dimension AND the ones of the child entities.
}

If you are against separating, we will need to add a separate Component named "Dimensions" to all 2D entities, which will hold the dimension value. The z coordinate and rotation will make less sense than they do in 3D, and performance will be worse.
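
For reference, a sketch of that alternative, with the hypothetical Dimensions component carrying the 2D-only data alongside the existing 3D Transform:

use specs::{Component, DenseVecStorage};

/// Fixed size of a 2D element (sprite, UI element), used by the renderer and
/// potentially by physics; attached alongside the regular 3D Transform.
struct Dimensions {
    width: f32,
    height: f32,
}
impl Component for Dimensions {
    type Storage = DenseVecStorage<Self>;
}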

Let me know your vote with a 👍 👎 or with a comment. Thanks!

@LucioFranco
Member

@amethyst/render-devs please take a look at this.

@zakarumych
Member

@Jojolepro I need stronger motivation arguments. The Motivation paragraph in the RFC is ... I have no idea what you are talking about there :)

@kabergstrom

kabergstrom commented Dec 16, 2018

I think a distinction between 2D and 3D elements in the form of a Dimensions component is good, though I believe it would be cleaner to make Transform2D a strict superset of Transform in terms of data representation.

pub struct Transform2D {
    /// Translation + rotation value
    iso: Isometry3<f32>,
    /// Scale vector
    scale: Vector3<f32>,
    dimension: Vector2<f32>, // All 2d elements have fixed dimensions. This is used by the renderer (read sprites and ui) and potentially by physics.
}

This will make it much easier to write code and tools that work on both types.

@AnneKitsune
Contributor Author

@omni-viral To add a rendering feature today, you need to add it to every 2D pass by copy-pasting. The motivation here is to merge the components into more reusable ones so we can remove the copy-paste.

Less copy-paste = fewer bugs and less maintenance.

@AnneKitsune
Contributor Author

@kabergstrom that's the alternative solution of adding a Dimensions component. What you are proposing involves the same drawbacks as my proposition, but without the advantages.

@torkleyy
Member

If we separate them, we at least need a generic API for working with them. I think @Rhuagh has experience with this ;)

@zakarumych
Member

Again: why not use nalgebra types (directly or wrapped) instead of reinventing the wheel?
Adding more data to the Transform* types seems odd to me. For example, there should be a better place for dimension.

@kabergstrom

@Jojolepro From a tooling perspective, I disagree. Tools would most likely have an easier time working with a superset of Transform than with a very different data representation.

I also think it's better to retain 3D rotations even when working with 2D, because you may want to render these in a 3D world. And in my opinion, rendering UI to a texture before rendering it in-world is not acceptable for a number of use cases, especially on high-resolution displays, due to the VRAM requirements of intermediate texture buffers.

@kabergstrom

Oh, one more thing I thought about: Pivots. Since the Transform2D concept defines dimensions, it will be essential to be able to define a pivot point for rotations.

@torkleyy
Member

torkleyy commented Jan 5, 2019

I'd like to move to close this RFC. There are some interesting concepts here that I don't want to reject, but what I want to do is move towards keeping the current Transform, dropping the idea of a split.

As for the UI, I'd be in favour of using the Transform component; special constructors can be added to improve the usability.

Reasoning:

  • splitting is a breaking change
  • it's a lot of work
  • makes tooling harder
  • duplicates code handling transforms

@AnneKitsune
Contributor Author

@kabergstrom @Frizi @omni-viral

Feature status:
Transform2D: Delayed/Possibly cancelled
ScreenSpace: Planned/Blocker, high priority in the UI team
Overflow: Planned/Blocker, high priority in the UI team
MaterialPbm: Delegated to the Rendering team and the new refactor discussions
UV2D: Planned/Blocker, high priority in the UI team
Material Effects: Delegated to the Rendering team and the new refactor discussions

All of UV2D, ScreenSpace, and Overflow are technically high-priority tasks of the rendering team too (renderer-only changes). They are blockers to the work of the UI team, so they should be worked on as soon as possible.

Is there anyone who wants to take the lead on one, two, or all three of the last-mentioned features?

Also, the rendering team lead should take note of the Material Effects and MaterialPbm propositions, or explicitly dismiss them, as this RFC is now closed.

Thanks everyone.
