Parallelize symbolic execution of tests #2009

Open
tothtamas28 opened this issue Aug 10, 2023 · 2 comments
Labels: engagement (Related to an ongoing engagement), enhancement (New feature or request)

Comments

tothtamas28 (Contributor) commented Aug 10, 2023

This allows better worker utilization in case the number of remaining tests is smaller than the number of workers (in particular, if there's only a single test left).

Design constraints

  1. Although kore-rpc is able to handle requests in parallel, each proof will probably need its own KoreServer instance, because pyk.proof.reachability.APRProver calls add_dependencies_module in its initializer. This limitation needs further investigation though; maybe there's a way to prevent interference between proofs on the same server.
  2. Due to the Global Interpreter Lock in CPython, we either need to ensure that the Python code has minimal overhead, or use process-level parallelism instead of threads. The latter means we need to ensure pickleability of objects that are sent between processes (see the sketch after this list).
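
A minimal sketch of what design constraint 2 implies in practice, assuming process-level parallelism is chosen: with a ProcessPoolExecutor, the task function has to be defined at module level, and its arguments and result have to be picklable. run_proof_step is a hypothetical name used only for illustration.

from concurrent.futures import ProcessPoolExecutor


def run_proof_step(proof_id: str, port: int) -> dict:
    # Hypothetical task function: defined at module level so it can be pickled.
    # It only receives plain data (the id of the proof and the port of the
    # KoreServer dedicated to it, per design constraint 1) and returns plain data.
    return {'proof_id': proof_id, 'port': port}


if __name__ == '__main__':
    with ProcessPoolExecutor(max_workers=2) as pool:
        future = pool.submit(run_proof_step, 'test_foo', 12345)
        print(future.result())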

Draft

This is based on pyk.kore.pool.KoreServerPool (design constraint 1), which currently uses concurrent.futures.ThreadPoolExecutor internally. As mentioned, this will probably have to be replaced by ProcessPoolExecutor (design constraint 2).

The main thread initializes the pool with some initial tasks, and then processes results in a loop as they become available. Processing a result involves:

  • Persisting proof results to disk: as this is handled by a single thread, the data model does not have to be thread-safe.
  • Submitting new tasks to the pool.
import time

# pending is a collection of futures
while pending:
    # pick a done future to process
    current_future = next((future for future in pending if future.done()), None)

    # if there is none, sleep for a while then retry
    if current_future is None:
        time.sleep(0.1)
        continue

    # get the result - if an exception occurred in the thread, it will be raised
    result = current_future.result()
    # update the corresponding proof on disk
    write_data_to_disk(result)

    # schedule new tasks
    next_tasks = get_next_tasks(result)
    for fn, args in next_tasks:
        next_future = pool.submit(fn, *args)
        pending.append(next_future)

    # this task is completed
    pending.remove(current_future)
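
A possible refinement of the loop above (not part of the original draft): concurrent.futures.wait can block until at least one future completes, which avoids the sleep-and-retry polling. pool, write_data_to_disk and get_next_tasks are the same placeholders as in the draft; initial_futures stands for the tasks submitted during initialization.

from concurrent.futures import FIRST_COMPLETED, wait

pending = set(initial_futures)
while pending:
    # block until at least one future is done; wait() returns two sets
    done, pending = wait(pending, return_when=FIRST_COMPLETED)
    for current_future in done:
        result = current_future.result()  # re-raises exceptions from the worker
        write_data_to_disk(result)
        for fn, args in get_next_tasks(result):
            pending.add(pool.submit(fn, *args))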

Ideally, the granularity of a task should be a single proof step (i.e. an RPC call, plus as much of the necessary pre- and post-processing as makes sense). This is the most challenging part of the issue: it requires counterparts of KCFGExplore, APRProver, kevm_prove, etc. that 1) enable fine-grained control of the proof process and 2) do not write to disk. This suggests a prover interface like:

class Prover:
    def advance_proof(self, proof_state: ProofState) -> ProofStep:
        # Does a single RPC call based on proof state and prover configuration. Does not write to disk.
        ...
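
For illustration only, ProofState and ProofStep in the sketch above could be plain, picklable value objects; their actual shape in pyk is part of the design work for this issue, so the fields below are hypothetical.

from dataclasses import dataclass


@dataclass(frozen=True)
class ProofState:
    # hypothetical fields: just enough information to issue the next RPC request
    proof_id: str
    node_id: int  # KCFG node to advance next


@dataclass(frozen=True)
class ProofStep:
    # hypothetical fields: the outcome of a single RPC call, to be persisted
    # and used by the main thread to schedule follow-up tasks
    proof_id: str
    source_node: int
    new_nodes: tuple[int, ...]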

Further considerations

Task scheduling

It would be good to have control over task scheduling. For example, in the simplest case, tests are assigned a unique priority, and each task inherits the priority of the test it belongs to. This means that a test with a lower priority can only run if tests with a higher priority don't have enough tasks scheduled to utilize all workers. In a good design, such scheduling strategies can be provided by the caller (e.g. in the form of a callable that takes a task and returns its priority).

Unfortunately, Executor implementations take tasks from a Queue, so tasks are processed in FIFO order. But maybe there's a way to wrap the executor and use an additional PriorityQueue:

  • Keep track of executor utilization (use Future.add_done_callback).
  • Keep submitted tasks in a PriorityQueue. Only submit to the executor if its queue is empty (a sketch follows the list).
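
A sketch of such a wrapper, assuming integer priorities where a lower value means higher priority. PriorityExecutor and its internals are illustrative only, not existing pyk or stdlib API; it keeps tasks in a PriorityQueue and only hands them to the wrapped executor while that executor has idle workers, tracking utilization via Future.add_done_callback.

import itertools
import threading
from concurrent.futures import Executor, Future
from queue import PriorityQueue
from typing import Any, Callable


class PriorityExecutor:
    def __init__(self, executor: Executor, max_workers: int):
        self._executor = executor
        self._max_workers = max_workers
        self._lock = threading.Lock()
        self._running = 0  # number of tasks currently handed to the executor
        self._counter = itertools.count()  # tie-breaker for equal priorities
        self._queue: PriorityQueue = PriorityQueue()

    def submit(self, priority: int, fn: Callable[..., Any], *args: Any) -> Future:
        outer: Future = Future()
        self._queue.put((priority, next(self._counter), fn, args, outer))
        self._dispatch()
        return outer

    def _dispatch(self) -> None:
        # only pass tasks on while the wrapped executor has idle workers,
        # so its internal FIFO queue stays empty and priorities are respected
        with self._lock:
            while self._running < self._max_workers and not self._queue.empty():
                _, _, fn, args, outer = self._queue.get()
                self._running += 1
                inner = self._executor.submit(fn, *args)
                inner.add_done_callback(lambda f, outer=outer: self._on_done(f, outer))

    def _on_done(self, inner: Future, outer: Future) -> None:
        with self._lock:
            self._running -= 1
        # propagate the result or exception to the future returned by submit()
        exc = inner.exception()
        if exc is not None:
            outer.set_exception(exc)
        else:
            outer.set_result(inner.result())
        self._dispatch()

With a wrapper like this, a scheduling strategy as described above reduces to choosing the priority value passed to submit.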

Graceful shutdown

On an exception or user request, the prover should shut down gracefully. In particular:

  • Properly close all resources (the pool, servers, clients, etc.).
  • Write data from all done tasks to disk (see the sketch after this list).
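
A sketch of what this could look like around the main loop, assuming the pool, pending and write_data_to_disk placeholders from the draft above, a hypothetical run_main_loop wrapper for the result-processing loop, and a hypothetical list servers of KoreServer handles.

try:
    run_main_loop()  # the result-processing loop from the draft above
finally:
    # stop feeding the pool; cancel_futures requires Python >= 3.9
    pool.shutdown(wait=False, cancel_futures=True)
    # persist results of tasks that completed before the shutdown
    for future in pending:
        if future.done() and not future.cancelled() and future.exception() is None:
            write_data_to_disk(future.result())
    # release servers and clients (assuming they expose close(), or are used
    # as context managers instead)
    for server in servers:
        server.close()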
anvacaru changed the title from "Parallelize symbolic execution of tests" to "Parallelize symbolic execution of testsz" Oct 2, 2023
tothtamas28 changed the title from "Parallelize symbolic execution of testsz" to "Parallelize symbolic execution of tests" Oct 5, 2023
yale-vinson added the enhancement and engagement labels Oct 11, 2023
@yale-vinson

Issue blocked by Backend Booster #383

@yale-vinson

@nwatson22 As Backend Booster #383 is closed, should we unblock this and close it out as well?

nwatson22 removed their assignment May 23, 2024