[question] Transferring work units from one pool to another #383
Thanks for your question, and sorry for the late reply. Briefly speaking, I would do the following:

```c
while (1) {
    ABT_thread threads[NUM_THREADS]; /* Say NUM_THREADS = 64 */
    size_t num = 0;
    ABT_pool_pop_threads(pool1, threads, NUM_THREADS, &num);
    if (num == 0)
        break;
    ABT_pool_push_threads(pool2, threads, num);
}
```

Some details: …
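A note on the shape of that loop: popping and pushing up to NUM_THREADS work units per call batches the transfer, which presumably amortizes the pools' synchronization cost compared to moving one ULT at a time, and the loop keeps retrying until a pop returns zero units, i.e., until pool1 is drained.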
Thanks for your answer! I realized that if there are ULTs that are blocked (…
Unfortunately, there's no way to pop ULTs that are blocked since internally, …
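To illustrate why a blocked ULT cannot be popped, here is a minimal sketch (the eventual-based blocking and all names are illustrative, not from this thread): a ULT that blocks on an `ABT_eventual` presumably leaves its pool until the eventual is set, so pool operations cannot find it while it waits.

```c
#include <abt.h>
#include <stdio.h>

static ABT_eventual ev;

/* This ULT blocks immediately; while it waits on the eventual it is
 * no longer stored in its pool, so pool pops cannot find it. */
static void waiter(void *arg)
{
    (void)arg;
    ABT_eventual_wait(ev, NULL);
}

int main(int argc, char *argv[])
{
    ABT_init(argc, argv);

    ABT_xstream xstream;
    ABT_xstream_self(&xstream);
    ABT_pool pool;
    ABT_xstream_get_main_pools(xstream, 1, &pool);

    ABT_eventual_create(0, &ev);
    ABT_thread t;
    ABT_thread_create(pool, waiter, NULL, ABT_THREAD_ATTR_NULL, &t);
    ABT_thread_yield(); /* let the waiter run until it blocks */

    size_t size;
    ABT_pool_get_size(pool, &size);
    printf("pool size while waiter is blocked: %zu\n", size); /* expect 0 */

    ABT_eventual_set(ev, NULL, 0); /* unblocks the waiter, which is
                                      pushed back to its pool */
    ABT_thread_free(&t);           /* joins the waiter */
    ABT_eventual_free(&ev);
    ABT_finalize();
    return 0;
}
```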
Assuming I have a pool P1 which I know (1) is not used by any scheduler/xstream at the moment, (2) has no currently running ULT or task that could return to it upon yielding, and (3) will not be used as an argument to `ABT_thread_create` or other functions, i.e., the ULTs that are in the pool are going to stay there and no new ULTs will be pushed or created in that pool. I want to migrate the content of P1 to a pool P2 (which is in use by a scheduler/xstream and has MPMC access). Is the following the right way of doing such a migration (a code sketch of these steps follows below)?

1. `ABT_pool_get_size` to get the number of ULTs in P1
2. `ABT_pool_pop_threads` to pop them all out of P1
3. `ABT_pool_push_threads` to push them to P2

In the above, should I call `ABT_thread_set_associated_pool` before pushing the ULTs into P2? I also see the function `ABT_thread_migrate_to_pool`. Can it be used to replace step 3? I see in the documentation "Request a migration of a work unit to a specific pool.", but nothing about when this migration would happen. In the case of non-running ULTs, would this migration happen immediately?
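For concreteness, here is a minimal sketch of the three-step drain described above, assuming the calls keep the signatures used earlier in this thread; the helper name `move_all` and the omitted error checking are illustrative, and the commented-out `ABT_thread_set_associated_pool` loop marks the open question rather than a confirmed requirement.

```c
#include <abt.h>
#include <stdlib.h>

/* Sketch: drain every ULT from p1 (assumed quiescent per conditions
 * (1)-(3) above) and push them all into the live MPMC pool p2. */
static void move_all(ABT_pool p1, ABT_pool p2)
{
    /* Step 1: how many work units are currently in P1. */
    size_t size;
    ABT_pool_get_size(p1, &size);
    if (size == 0)
        return;

    ABT_thread *threads = (ABT_thread *)malloc(size * sizeof(ABT_thread));

    /* Step 2: pop them all out of P1. */
    size_t num = 0;
    ABT_pool_pop_threads(p1, threads, size, &num);

    /* Open question from above: is this re-association needed
     * before pushing into a different pool?
     * for (size_t i = 0; i < num; i++)
     *     ABT_thread_set_associated_pool(threads[i], p2);
     */

    /* Step 3: push them into P2. */
    ABT_pool_push_threads(p2, threads, num);

    free(threads);
}
```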