Just pros and cons of tech choices, plus little things I don't want to forget while implementing renderling.
- why are there repeats of nodes in document.nodes?
- sharing code on CPU and GPU
- sanity testing GPU code on CPU using regular tests (see the sketch after this list)
- ability to run shaders on either CPU or GPU and profile
- it's Rust
- using cargo and Rust module system
- expressions!
- type checking!
- traits!
- editor tooling!
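Here is a minimal sketch of what sharing and testing shader code on the CPU can look like, assuming the usual rust-gpu setup where the shader crate is `no_std` only on the SPIR-V target and depends on `glam`; the names are illustrative, not renderling's actual API:

```rust
// Hypothetical shared shader-crate code, compiled both to SPIR-V (via rust-gpu)
// and natively for tests. `no_std` only applies on the SPIR-V target so the
// ordinary test harness still works on the CPU.
#![cfg_attr(target_arch = "spirv", no_std)]

use glam::Vec3;

/// Pure math used by a shader entry point on the GPU and by tests on the CPU.
pub fn lambert(normal: Vec3, light_dir: Vec3) -> f32 {
    normal.dot(light_dir).max(0.0)
}

#[cfg(test)]
mod tests {
    use super::*;

    #[test]
    fn lambert_is_one_when_facing_the_light() {
        assert_eq!(lambert(Vec3::Z, Vec3::Z), 1.0);
    }
}
```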
- can't use enums (but you can't in glsl or hlsl or msl or wgsl either)
  - actually you can, but they must be simple (like `#[repr(u32)]`); see the sketch after this list
- struct layout size/alignment errors can be really tricky
  - solved by using a slab
- rust code must be no-std
- don't use `while let` or `while` loops
- for loops are hit or miss, sometimes they work and sometimes they don't
- can't use `.max` or `.min` on integers
- meh, but no support for dynamically sized arrays (how would that work in no-std?)
- can't use bitwise rotate_left or rotate_right
- sometimes things like indexing are just funky-joe-monkey:
- cannot use shader entry point functions nested within each other
- if your shader crate is just a library and has no entry points, it cannot have the `crate-type = ["rlib", "dylib"]` Cargo.toml annotation or you will get "Undefined symbols" errors
- no recursion! you must convert your recursive algos into ones with manually managed stacks (sketch after this list)
- `usize` is `u32` on `target_arch = "spirv"`! Watch out for silent shader panics caused by wrapping arithmetic operations.
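Two of these gotchas in one hedged sketch (illustrative names, not renderling's code): a field-less `#[repr(u32)]` enum of the "simple" kind that works, and a recursive traversal rewritten with a manually managed, fixed-size stack, using `loop`/`break` instead of a `while` loop:

```rust
/// Field-less enums with an explicit integer repr are the "simple" kind that compiles for SPIR-V.
#[repr(u32)]
#[derive(Clone, Copy, PartialEq)]
pub enum LightKind {
    Directional = 0,
    Point = 1,
    Spot = 2,
}

/// Recursion is not allowed, so a recursive binary-tree sum becomes an
/// iterative one with an explicit stack. The stack is a fixed-size array
/// because dynamically sized arrays aren't available either.
pub fn sum_tree(values: &[u32; 64], children: &[[u32; 2]; 64], root: u32) -> u32 {
    let mut stack = [0u32; 64];
    let mut top: usize = 1;
    stack[0] = root;
    let mut sum = 0u32;
    // `loop` + `break` instead of a `while` loop, per the gotcha above.
    loop {
        if top == 0 {
            break;
        }
        top -= 1;
        let i = stack[top] as usize;
        sum += values[i];
        let kids = children[i];
        // `u32::MAX` marks "no child" in this illustrative layout.
        if kids[0] != u32::MAX && top < stack.len() {
            stack[top] = kids[0];
            top += 1;
        }
        if kids[1] != u32::MAX && top < stack.len() {
            stack[top] = kids[1];
            top += 1;
        }
    }
    sum
}
```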
- works on all platforms with the same API
- much more configurable than OpenGL
- much better error messages than OpenGL
- less verbose than Vulkan
- the team is very responsive
- no support for arrays of textures on web, yet
- atomics are not supported in the Naga SPIRV frontend, which limits the capabilities of compute
- lots of other graphics libs use it
- speed
- types have different structures on different SIMD targets: for example, Vec4 is a struct on my macOS machine but a tuple on SIMD Linux
- bindless - wth exactly is it
- "location[...] is provided by the previous stage output but is not consumed as input by this stage."
- rust-gpu has optimized away the shader input; you must use the input parameter in your downstream shader
- sometimes the optimization is pretty aggressive, so you really gotta use the input
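A hedged illustration of the fix (not renderling's shaders, just the shape of it): make sure the fragment entry point actually reads the input that the vertex stage writes, so rust-gpu can't optimize the interface location away:

```rust
// Hypothetical rust-gpu fragment entry point. Reading `in_color` keeps the
// corresponding interface location alive so the stages still match up.
use spirv_std::spirv;
use spirv_std::glam::Vec4;

#[spirv(fragment)]
pub fn main_fs(in_color: Vec4, output: &mut Vec4) {
    *output = in_color; // actually consume the input from the vertex stage
}
```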
- Forward+ shading (as opposed to deferred)
- tl;dr: in a compute shader before the vertex pass:
- break up the frame into tiles
- for each tile, compute which lights contribute to the pixels in that tile
- during shading, iterate over only the lights for each pixel according to its tile
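A CPU-side sketch of that tiling step, with illustrative names and a screen-space circle-vs-tile test standing in for whatever culling renderling actually does:

```rust
/// Hypothetical light with a screen-space position and radius of influence.
struct Light {
    position: [f32; 2],
    radius: f32,
}

const TILE_SIZE: u32 = 16;

/// For each TILE_SIZE x TILE_SIZE tile of the frame, collect the indices of
/// lights that touch it. The shading pass then loops only over
/// `tile_lights[tile_index]` for every pixel in that tile.
fn cull_lights(width: u32, height: u32, lights: &[Light]) -> Vec<Vec<u32>> {
    let tiles_x = (width + TILE_SIZE - 1) / TILE_SIZE;
    let tiles_y = (height + TILE_SIZE - 1) / TILE_SIZE;
    let mut tile_lights = vec![Vec::new(); (tiles_x * tiles_y) as usize];

    for ty in 0..tiles_y {
        for tx in 0..tiles_x {
            let min = [(tx * TILE_SIZE) as f32, (ty * TILE_SIZE) as f32];
            let max = [min[0] + TILE_SIZE as f32, min[1] + TILE_SIZE as f32];
            for (i, light) in lights.iter().enumerate() {
                // Circle vs. tile AABB overlap test.
                let cx = light.position[0].clamp(min[0], max[0]);
                let cy = light.position[1].clamp(min[1], max[1]);
                let dx = light.position[0] - cx;
                let dy = light.position[1] - cy;
                if dx * dx + dy * dy <= light.radius * light.radius {
                    tile_lights[(ty * tiles_x + tx) as usize].push(i as u32);
                }
            }
        }
    }
    tile_lights
}
```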
- Help inspecting buffers in Xcode
- command prefix that includes some vulkan debugging stuff:
  - `VK_LOADER_LAYERS_ENABLE='*validation' VK_LAYER_ENABLES=VK_VALIDATION_FEATURE_ENABLE_DEBUG_PRINTF_EXT DEBUG_PRINTF_TO_STDOUT=1`
- When generating mipmaps I ran into a problem where sampling the original texture always came up [0.0, 0.0, 0.0, 0.0]. It turned out the sampler was trying to read from the mipmap at level 1, which of course didn't exist yet, since that was the one I was trying to generate. The fix was to sample a different texture, one without slots for the mipmaps, and then throw away that texture.
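A sketch of that workaround in wgpu terms (the helper name and signature are mine, not renderling's): copy the base level into a throwaway texture with `mip_level_count: 1`, sample that while rendering each mip of the real texture, then drop it:

```rust
// Hypothetical helper: make a single-mip copy of the base level to sample from
// while generating the real texture's mip chain. `source` must have COPY_SRC usage.
fn make_sampling_copy(
    device: &wgpu::Device,
    encoder: &mut wgpu::CommandEncoder,
    source: &wgpu::Texture,
    size: wgpu::Extent3d,
    format: wgpu::TextureFormat,
) -> wgpu::Texture {
    let copy = device.create_texture(&wgpu::TextureDescriptor {
        label: Some("mipmap-source-copy"),
        size,
        mip_level_count: 1, // no empty mip slots for the sampler to fall into
        sample_count: 1,
        dimension: wgpu::TextureDimension::D2,
        format,
        usage: wgpu::TextureUsages::TEXTURE_BINDING | wgpu::TextureUsages::COPY_DST,
        view_formats: &[],
    });
    encoder.copy_texture_to_texture(source.as_image_copy(), copy.as_image_copy(), size);
    copy
}
```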
- wrote an NLNet grant proposal to add atomics to `naga`'s spv frontend
  - roughly to complete this PR gfx-rs/naga#2304
- fixing `wgpu`'s vulkan backend selection on macOS