Replies: 6 comments
-
Thanks a lot for looking into this! It will probably take some time for us to really understand what's going on and where the highest gains can be expected.
-
To be clear, it's not 100% clear even to me what will work and what won't. I have some experience from Revise and Julia itself, but that's not enough for me to say with confidence that I can see a complete path forward. In general, my experience has been that getting rid of inference problems helps in and of itself (better performance), gives you less vulnerability to invalidation, and perhaps makes precompilation more effective. It probably isn't strictly necessary to fix all inference problems to reduce inference time, but generally having fewer MethodInstances tends to correlate with less inference time and better reusability of previously-inferred (and hopefully cached) MethodInstances. And of course, if the user makes some plots (paying the latency cost) and then loads some new packages that invalidate much of Makie's code, the next plot is going to pay that latency again. So on balance, I think it's probably worth doing even if there might be an easy shortcut that would mostly work without it. If you're not aware, the main tools I've found useful for this are
I should also say I have quite a few commitments over the next 1.5 months, so I really should limit how much time I put into this.
-
Thinking a bit more about this, `@extract plot (color, colormap, colorrange, alpha)` could generate something like

```julia
color = plot.attributes[:color]::Observable{Symbol}
colormap = plot.attributes[:colormap]::Observable{Any}
colorrange = plot.attributes[:colorrange]::Observable{Any}
alpha = plot.attributes[:alpha]::Observable{Float32}
```

which would solve a lot of your type-inference problems very easily; all you'd need to do is change the `@extract` macro. But I'd need someone who knows this package to generate that.
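A minimal sketch of how such a typed extraction macro could work. Everything here is a stand-in for illustration: `Observable` is a local struct (the real one lives in Observables.jl), and `ATTR_TYPES` is a hypothetical per-attribute type table that the package would have to supply; none of these are Makie's actual internals.

```julia
# Stand-in for Observables.Observable, only to keep the sketch self-contained.
struct Observable{T}
    val::T
end

struct Plot
    attributes::Dict{Symbol,Observable}
end

# Hypothetical registry mapping attribute names to their concrete Observable
# types; in Makie this knowledge would have to come from the plot-type definition.
const ATTR_TYPES = Dict(
    :color      => Observable{Symbol},
    :colormap   => Observable{Any},
    :colorrange => Observable{Any},
    :alpha      => Observable{Float32},
)

# Generate `name = plot.attributes[:name]::Observable{T}` for each requested
# name, so inference sees a concrete Observable type instead of Any.
macro extract_typed(plot, names)
    exprs = map(names.args) do name
        :($(esc(name)) = $(esc(plot)).attributes[$(QuoteNode(name))]::$(ATTR_TYPES[name]))
    end
    return Expr(:block, exprs...)
end

plot = Plot(Dict{Symbol,Observable}(
    :color      => Observable(:red),
    :colormap   => Observable{Any}(:viridis),
    :colorrange => Observable{Any}(nothing),
    :alpha      => Observable(1.0f0),
))

@extract_typed plot (color, colormap, colorrange, alpha)
```

Because the type annotation is baked in at macro-expansion time, each extracted variable has a concrete type as far as inference is concerned, at the cost of a runtime typeassert.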
-
Yeah, the problem is that some of those have quite a large range of possible input types, and my philosophy has been to convert as late as possible, to let backends stay as close to the user input as desired (very helpful when a backend accepts some of the user input data directly, without conversions, and is faster that way). But I'm sure we can improve this quite a bit. E.g., there are definitely some attributes we can convert early, and for the others we could probably use a wrapper type, which would then also come out with an exact type all the time. Btw,
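To illustrate the "convert some attributes early" half of this, here is a hedged sketch. `Attributes`, `convert_attribute`, and the conversion rules are stand-in names invented for this example, not Makie's actual pipeline:

```julia
# Stand-in attribute store; the real Makie Attributes type is different.
struct Attributes
    converted::Dict{Symbol,Any}
end

# Hypothetical early-conversion rules: attributes with one obvious target type
# get converted at assignment time, so downstream code sees a concrete type.
convert_attribute(::Val{:alpha}, x) = Float32(x)
convert_attribute(::Val{:visible}, x) = Bool(x)
# Fallback: keep the raw user input, converting as late as possible so
# backends can still see it unchanged.
convert_attribute(::Val, x) = x

Base.setindex!(a::Attributes, x, name::Symbol) =
    (a.converted[name] = convert_attribute(Val(name), x))
Base.getindex(a::Attributes, name::Symbol) = a.converted[name]

attrs = Attributes(Dict{Symbol,Any}())
attrs[:alpha] = 1          # eagerly converted: stored as a Float32
attrs[:color] = "red"      # kept exactly as the user gave it
```

Dispatching on `Val(name)` keeps the per-attribute rules open for extension: adding an early conversion for a new attribute is just one more method, without touching the store.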
-
Yeah, it would be fine to leave some untyped. Generally, once type-specification starts causing feelings of stress, that's when I can tell I should just stop putting types on 😄.
-
It should be noted that this whole discussion started before the package precompilation work in Julia 1.8 and 1.9. Inferrability is no longer at a high premium for effective precompilation, although it is still an issue for invalidation. I have observed some package interactions that greatly increase Makie's latency, but I haven't made a systematic study. Given how much both Makie internals and Julia itself have changed, I think it would be fine to close this. But I'll let the Makie devs decide.
-
My estimates suggest that about half the latency of Makie is due to type inference and the other half to native code generation. While currently there's little short of PackageCompiler that can be done about native code, inference results can, under some circumstances, be cached in the precompile file. In recent conversations, a tentative plan has emerged:
Here's a running list of things I'm finding to worry about in terms of inferrability:
- `@extract` results in all of its variables being inferred as `::Any`. Are the two choices `Attributes` and `Observable{Any}`? Could we even use constant-propagation on the name to allow inference to know which of these it will be? (xref https://timholy.github.io/SnoopCompile.jl/stable/snoopr/#Inferrable-field-access-for-abstract-types-1)
- `getproperty` methods that "casually" combine field access and attribute access (https://github.com/JuliaPlots/AbstractPlotting.jl/blob/22410e92fd2bbad51e99cfccbea277941b17d4aa/src/makielayout/lobjects/lobject.jl#L13-L19) break inference for the actual fields of the object. One potential fix is to define them inside an `@eval` block for each specific type so that constant-propagation allows inference to succeed (similar issues to the previous point). xref "Rework `getproperty` for LObjects to be inferrable for fields", JuliaPlots/AbstractPlotting.jl#507
- `lift(args, typ=T)` does not infer as returning an `Observable{T}`. Consider `lift(Observable{T}, args...)`?

A sometimes-useful trick for figuring out what type objects actually are in `do`-blocks: from `@snoopi`, the MethodInstance gives the specific types it was compiled with. `mi.def` shows the method (including file/line number). Combining the two, you can annotate the arguments of the `do`-block. However, there's a risk: the instances you compiled may not be the full set of possible instances. So it's much better to proceed from actual awareness of what the possible types are.
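On the `lift` point above, a small sketch of why moving the result type into a positional argument helps inference. `SimpleObservable`, `lift_kw`, and `lift_typed` are hypothetical stand-ins for this illustration, not Makie's actual API:

```julia
# Stand-in for Observables.Observable.
struct SimpleObservable{T}
    val::T
end

# Keyword-style: the target type arrives as a runtime value, so the return
# type SimpleObservable{typ} is invisible to inference at the call site
# (absent constant propagation of the keyword).
lift_kw(f, obs...; typ = Any) = SimpleObservable{typ}(f(map(o -> o.val, obs)...))

# Positional-type style: dispatching on ::Type{SimpleObservable{T}} bakes T
# into the method signature, so the return type is statically known.
lift_typed(::Type{SimpleObservable{T}}, f, obs...) where {T} =
    SimpleObservable{T}(f(map(o -> o.val, obs)...))

a = SimpleObservable(1.0)
r1 = lift_kw(x -> x + 1, a; typ = Float64)
r2 = lift_typed(SimpleObservable{Float64}, x -> x + 1, a)
```

Both calls produce the same value; the difference is only visible to the compiler, which can propagate `T` through `lift_typed` callers without runtime checks.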