Improve scheduling by interleaving setup and teardown of an experiment #2560
Comments
What do you expect? The experiment needs to be prepared before it can run. If you have two different experiments, then those steps should already run in parallel. Precompilation is already supported; please read the manual.
It could hardly be the other way around. On the other hand, experiment stages are already pipelined (and there can be multiple pipelines at any one time). See the manual. There is also already support for precompilation.
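For readers following along, here is a minimal sketch of that existing precompilation support, using the `core.precompile()` call that also appears in the code later in this thread; the class name and kernel body are illustrative, not from the manual:

```python
from artiq.experiment import EnvExperiment, kernel


class PrecompileDemo(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    def prepare(self):
        # Compilation happens here, in the prepare stage, possibly while
        # another experiment still occupies the core device.
        self.precompiled = self.core.precompile(self.pulse_sequence)

    @kernel
    def pulse_sequence(self):
        self.core.break_realtime()
        # ... schedule RTIO events here ...

    def run(self):
        self.precompiled()  # executes the already-compiled kernel
```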
Oh, race condition. Excuse me.
What operating system is this?
If it really matters (but you should make the case for it; everything is a trade-off against code complexity), it should be possible to create some worker processes in advance.
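To illustrate the idea only (this is generic Python, not ARTIQ's actual worker machinery): the processes are forked before any job exists, so a new run only pays the cost of sending the job, not of process startup and imports.

```python
import multiprocessing as mp


def worker(jobs, results):
    # The process is already started and its imports are done;
    # it just blocks here until a job arrives.
    for job in iter(jobs.get, None):  # None is the shutdown sentinel
        results.put(f"ran {job}")


if __name__ == "__main__":
    jobs, results = mp.Queue(), mp.Queue()
    # Spawn the workers ahead of time, before any job is submitted.
    pool = [mp.Process(target=worker, args=(jobs, results)) for _ in range(2)]
    for p in pool:
        p.start()
    jobs.put("experiment_1")
    print(results.get())  # "ran experiment_1"
    for _ in pool:
        jobs.put(None)    # shut the workers down
    for p in pool:
        p.join()
```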
Yes, but I can prepare for the next experiment while writing the results for the previous one.
Why not extend it to repeated runs of the same experiment (with different argument overrides)?
Yes, but how would I tell the scheduler to exploit the following?

```python
class Experiment(EnvExperiment):
    def prepare(self):
        self._run = self.precompile(self.run)

    def run(self):
        self._run()
```
We are using Linux amd64. Yes, you are right, a starting point is to get a better understanding of the actual bottlenecks. I can provide an analysis in a few weeks (also testing the new NAC3 compiler).
This should already be the case. The next-to-be-run experiment is already prepared while the previous one is still running.
I just stumbled upon this topic after we implemented precompilation into our experiments recently. @bodokaiser Your example code does not seem to work in its current form, but this example here eventually will:

```python
class Experiment(EnvExperiment):
    def build(self):
        self.setattr_device("core")

    def prepare(self):
        super().prepare()
        self._run = self.core.precompile(self.my_kernel)

    @kernel
    def my_kernel(self):
        # Reset the FIFO (just in case an experiment is still running).
        self.core.reset()
        self.core.break_realtime()
        # Do some kernel stuff on your core.
        # Wait until the core is ready for the next run.
        self.core.wait_until_mu(now_mu())

    def run(self):
        self._run()
```

The FIFO reset is just there in case there is still an experiment running; it might be dropped, though. Since we realized that writing this code into our experiments over and over becomes messy quite quickly, I wrote a decorator for it:

```python
@precompile
class BaseExperiment(EnvExperiment):
    def build(self):
        # Whatever you need for build.
        self.setattr_device("core")

    @kernel
    def run(self):
        print("Base Experiment")


@precompile
class MyExperiment(EnvExperiment):
    def build(self):
        # Whatever you need for build.
        self.setattr_device("core")
        self.base = BaseExperiment(self)

    @kernel
    def run(self):
        self.base.run()
        print("Derived Experiment")
```

The source for the decorator is here:

```python
import functools

ALLOW_COMPILE = True


def precompile(cls):
    '''
    Hooks into the prepare method and adds precompilation plus tracking of the
    compilation status. The tracking keeps sub-experiments from being compiled
    individually when they are marked as precompiled too. This saves time: the
    main experiment embeds the sub-experiments and compiles the sequence as a
    whole, so there is no gain in precompiling sub-experiments separately.
    '''
    @functools.wraps(cls, updated=())
    class PrecompiledExperiment(cls):
        def prepare(self):
            global ALLOW_COMPILE            # get the global compile state...
            do_compile = ALLOW_COMPILE      # ...back it up...
            ALLOW_COMPILE = False           # ...and disable compilation for embedded experiments while their prepare() methods run
            try:
                super().prepare()           # run the child experiments' prepare() methods
            finally:
                ALLOW_COMPILE = do_compile  # restore the global state to what it was before
            if do_compile:
                self._precompile()          # compile only if the global state was initially True

        @kernel
        def _wrapped_kernel(self):
            '''This kernel gets precompiled. It wraps the original kernel and adds the FIFO reset and wait_until_mu().'''
            # Reset the FIFO.
            self.core.reset()
            self.core.break_realtime()
            # Run the actual kernel.
            self.run()
            # This must be here, or new experiments/kernels will be submitted
            # to the core before the previous experiment has finished!
            self.core.wait_until_mu(now_mu())

        def _precompile(self):
            '''Precompiles the run() method during prepare() for faster execution when multiple experiments are scheduled.'''
            from time import time
            if not hasattr(self.run, '__wrapped__'):
                return  # run() is not decorated (e.g. by @kernel), so skip precompilation
            if not hasattr(self, "core"):
                self.setattr_device("core")  # add the core device if it has not been added yet
            # Precompile the run() method.
            start = time()
            kern = self.core.precompile(self._wrapped_kernel)
            print(f"PRECOMPILE time: {time() - start:.2f} s")
            # Replace the original run method with the precompiled version (monkey patching).
            self.run = kern

    return PrecompiledExperiment
```
That's exactly it. In fact, this is often the natural case, as the "previous experiment" might be a calibration experiment that updates some dataset with a new π time, frequency, etc. that the new experiment ought to pull in. One can of course opportunistically precompile experiments and then check afterwards whether any dataset/controller/… dependencies changed. It is possible to write a framework that achieves this as well, but baseline ARTIQ is a bit limited here, as you can't start to execute …
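A hedged sketch of that opportunistic approach, using the `get_dataset()` and `core.precompile()` calls already seen in this thread; the dependency list, dataset keys, and the snapshot-and-compare logic are hypothetical, not an ARTIQ feature:

```python
from artiq.experiment import EnvExperiment, kernel


class OpportunisticExperiment(EnvExperiment):
    # Hypothetical list of datasets the compiled kernel depends on.
    DATASET_DEPS = ["pi_time", "qubit_freq"]

    def build(self):
        self.setattr_device("core")

    def prepare(self):
        # Snapshot the dependencies, then precompile opportunistically.
        self._snapshot = {k: self.get_dataset(k) for k in self.DATASET_DEPS}
        self._precompiled = self.core.precompile(self.pulse)

    def run(self):
        # Re-check the dependencies just before execution; if a calibration
        # updated one of them in the meantime, compile again so the kernel
        # picks up the new values.
        if {k: self.get_dataset(k) for k in self.DATASET_DEPS} != self._snapshot:
            self._precompiled = self.core.precompile(self.pulse)
        self._precompiled()

    @kernel
    def pulse(self):
        self.core.break_realtime()
        # ... pulse sequence using the values captured at compile time ...
```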
ARTIQ Feature Request
Problem this request addresses
Scheduling of experiment runs is not efficient, i.e., we "waste" about 1 second between experiment runs.
Describe the solution you'd like
Currently, the scheduler worker runs an experiment run's setup and teardown steps, e.g., `exp.prepare` and `write_results`, sequentially. Can we have a worker thread for each stage? Could we also add a precompilation step?
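To make the proposal concrete, a minimal sketch in plain Python (not ARTIQ's actual scheduler code) of overlapping the `write_results` stage of one run with the `prepare` stage of the next; the `experiments` iterable and the stage method names mirror the ones mentioned above:

```python
from concurrent.futures import ThreadPoolExecutor


def run_pipeline(experiments):
    """Overlap write_results() of run N with prepare() of run N+1."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        pending_write = None
        for exp in experiments:
            prepared = pool.submit(exp.prepare)  # prepare the next run...
            if pending_write is not None:
                pending_write.result()           # ...while the previous results are written
            prepared.result()                    # wait until preparation is done
            exp.run()                            # run stages stay strictly sequential
            pending_write = pool.submit(exp.write_results)
        if pending_write is not None:
            pending_write.result()               # flush the last write
```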