Is your feature request related to a problem? Please describe.
When visualizing scopes that share an identical starting point - and therefore effectively overlap each other - added manually to puffin::Stream, the resulting scopes get summed and are shown as one prolonged track. Here is an example that registers the same 500 ms child workload - starting at 0 - three times:
Likewise, we have GPU workloads where the next command buffer starts running before the previous one has completed. Here too - albeit with different scope names - the entire track gets prolonged to fit every item on the line, even when it exceeds the parent scope Context `frame 0` Command buffer `2`, despite explicit start and end timings being set for every scope.
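For illustration, the effect can be reproduced with plain interval arithmetic (a minimal sketch, not puffin's actual code; the `(start_ns, duration_ns)` representation is an assumption made up for this example): three identical 500 ms children all starting at 0 sum to 1.5 s, which is what the prolonged track ends up showing, while the wall-clock time they actually cover is only 0.5 s.

```rust
// Hypothetical scope record: (start_ns, duration_ns). Not puffin's types.
fn summed_ns(children: &[(u64, u64)]) -> u64 {
    // Laying scopes out end-to-end sums their durations,
    // even when they actually overlap in time.
    children.iter().map(|&(_, d)| d).sum()
}

fn union_ns(children: &[(u64, u64)]) -> u64 {
    // Wall-clock span actually covered by the (possibly overlapping) scopes.
    let mut intervals: Vec<(u64, u64)> =
        children.iter().map(|&(s, d)| (s, s + d)).collect();
    intervals.sort();
    let mut covered = 0u64;
    let mut cursor = 0u64;
    for (start, end) in intervals {
        let s = start.max(cursor);
        if end > s {
            covered += end - s;
            cursor = end;
        }
    }
    covered
}

fn main() {
    // Three identical 500 ms children, all starting at t = 0.
    let children = [(0, 500_000_000u64); 3];
    assert_eq!(summed_ns(&children), 1_500_000_000); // track is drawn as 1.5 s
    assert_eq!(union_ns(&children), 500_000_000);    // real elapsed time: 0.5 s
    println!("summed = {} ns, union = {} ns",
             summed_ns(&children), union_ns(&children));
}
```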
(I take no responsibility for three different pipelines in the same frame having both a space, hyphen, and underscore 🤣)
Describe the solution you'd like
I expected either a panic or an Err() for submitting invalid data through the puffin::global_reporter, since I did not initially know that - presumably - profiling data submitted for "threads" (a CPU thread in the literal sense) is assumed to run serially. That is: if the start of the next sibling scope lies before the end of the current one, that should be an error?
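The check being proposed could look roughly like this (a hedged sketch; `ScopeRecord` and the error shape are made up for illustration and are not puffin's API): walk sibling scopes in order and flag any scope that starts before its predecessor ended.

```rust
// Hypothetical flat record of sibling scopes on one thread; not puffin's types.
#[derive(Debug)]
struct ScopeRecord {
    start_ns: u64,
    end_ns: u64,
}

/// Sibling scopes on a CPU thread are assumed to run serially:
/// each scope must start at or after the previous sibling ended.
fn validate_serial(siblings: &[ScopeRecord]) -> Result<(), String> {
    for pair in siblings.windows(2) {
        let (prev, next) = (&pair[0], &pair[1]);
        if next.start_ns < prev.end_ns {
            return Err(format!(
                "scope starting at {} ns overlaps previous sibling ending at {} ns",
                next.start_ns, prev.end_ns
            ));
        }
    }
    Ok(())
}

fn main() {
    let serial = [
        ScopeRecord { start_ns: 0, end_ns: 100 },
        ScopeRecord { start_ns: 100, end_ns: 250 },
    ];
    assert!(validate_serial(&serial).is_ok());

    let overlapping = [
        ScopeRecord { start_ns: 0, end_ns: 500 },
        ScopeRecord { start_ns: 0, end_ns: 500 }, // starts before the previous one ends
    ];
    assert!(validate_serial(&overlapping).is_err());
}
```

A pass like this could run after the fact, reporting the invalid data without aborting the profiler.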
Describe alternatives you've considered
It'd be great if puffin could somehow visualize these overlapping scopes, maybe a color or pattern to display overdraw? Displaying on multiple tracks is bound to be tricky, hard to see, and pretty much breaks the "flamegraph" concept. Perhaps a different waterfall view like Radeon Graphics Profiler could be considered? This may need a different kind of "profiling mode" to allow such kind of overlaps though.
I agree that returning an error here makes sense. I don't think puffin should return early, though, but instead let the user know the data is invalid after the fact.
A Radeon GPU Profiler-like view makes sense as well. Definitely open to contributions for that, even if it's just for egui and not puffin_viewer!
The summing of overlapping scopes is done by:
puffin/puffin/src/merge.rs, lines 128 to 133 in 17d0429
This is somewhat related to GPU profiling in #59.