There is a need to share the same parsec context between multiple libraries. Typically, MADNESS (over parsec) and TTG (also over parsec) will need to coexist in some applications.
In the current approach, one library (MADNESS in this case) initializes the parsec context and exposes it via a function in its API, and the other library (TTG in this case) uses this same context to support its execution.
Separation is guaranteed by the use of different taskpools for the two libraries.
In the current implementation, the use of parsec is hidden by both libraries: neither library's user-facing API exposes a parsec context. This makes sense for both, because each is ported to multiple runtime systems, and ideally the application developer should not need to be aware of which runtime system backs a given application.
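For concreteness, here is a minimal sketch of that sharing pattern against the PaRSEC C API. The `libA_*`/`libB_*` entry points are hypothetical stand-ins for the real MADNESS/TTG initialization paths, and taskpool registration is left as a comment since constructing a taskpool depends on the DSL used:

```cpp
#include <parsec.h>

// --- Library A (e.g. MADNESS): owns and publishes the context. ---
static parsec_context_t *a_context = nullptr;

void libA_initialize(int *argc, char ***argv) {
  // -1: let PaRSEC use all available cores
  a_context = parsec_init(-1, argc, argv);
  // A registers its own taskpool on the context, e.g.
  // parsec_context_add_taskpool(a_context, a_taskpool);
}

// Hypothetical getter through which A exposes the context in its API.
parsec_context_t *libA_get_parsec_context() { return a_context; }

// --- Library B (e.g. TTG): reuses the context created by A. ---
void libB_initialize(parsec_context_t *ctx) {
  // B keeps its work in a separate taskpool, so the two libraries
  // never mix task state, e.g.
  // parsec_context_add_taskpool(ctx, b_taskpool);
  (void)ctx;
}
```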
This creates two issues:
1. Library A does not know whether library B has already been initialized. In other words, we don't have an equivalent of MPI_Initialized(), or a parsec_get_default_context() function. Is this something that we could/would provide? Or how do we propose that different libraries solve this issue? (A sketch of what such an API could look like follows this list.)
2. Library A has initialized the parsec context and found a way to share it with library B, but library A has not initialized it the way library B wants. Typically, MADNESS would initialize parsec with MPI_COMM_SELF by default, as it doesn't use parsec to communicate, but TTG needs to change that communicator to a dup of MPI_COMM_WORLD. The problem is that library A already has a taskpool registered in the shared parsec context, so replacing the context is not an option. I don't see a way around this one: it looks to me like either A and B need to negotiate beforehand which communicator both will use (see the second sketch below), or A and B each need a function to remove their taskpool from the context and re-create/re-add it. Any better idea?
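For issue 1, here is a minimal sketch of what such an API could look like, assuming a process-wide singleton guarded by a mutex. None of the parsec_initialized()/parsec_get_default_context()/parsec_init_default() functions exist in PaRSEC today; they are proposals mirroring MPI_Initialized():

```cpp
#include <parsec.h>
#include <mutex>

namespace {
parsec_context_t *default_ctx = nullptr;
std::mutex ctx_mutex;
}

int parsec_initialized() {  // analogue of MPI_Initialized()
  std::lock_guard<std::mutex> lock(ctx_mutex);
  return default_ctx != nullptr;
}

parsec_context_t *parsec_get_default_context() {
  std::lock_guard<std::mutex> lock(ctx_mutex);
  return default_ctx;  // nullptr if nobody has initialized parsec yet
}

// First caller wins: initializes parsec and publishes the context;
// later callers get the already-published singleton back.
parsec_context_t *parsec_init_default(int ncores, int *argc, char ***argv) {
  std::lock_guard<std::mutex> lock(ctx_mutex);
  if (default_ctx == nullptr) default_ctx = parsec_init(ncores, argc, argv);
  return default_ctx;
}
```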
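For issue 2, here is a sketch of the "negotiate beforehand" option, with hypothetical parsec_request_world_comm()/parsec_negotiated_comm() helpers: every library declares its communication needs before anyone calls parsec_init(), so the context is created with the right communicator from the start and never has to be torn down with taskpools already attached:

```cpp
#include <mpi.h>

static bool needs_world = false;

// e.g. TTG would call this before context creation; MADNESS, which
// only needs MPI_COMM_SELF, would not.
void parsec_request_world_comm() { needs_world = true; }

// Called once by whichever library actually creates the context.
MPI_Comm parsec_negotiated_comm() {
  MPI_Comm dup;
  // Give the runtime a private dup so its traffic cannot collide
  // with application traffic on the same communicator.
  MPI_Comm_dup(needs_world ? MPI_COMM_WORLD : MPI_COMM_SELF, &dup);
  return dup;
}
```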
The problem: in a sizable graph of components with multiple vertices that use parsec, someone (safest: the root) must initialize parsec, but every vertex (component) must be able to initialize itself properly whether or not it is part of the graph. Making an implicit parsec context available (a la the CUDA runtime API, as opposed to the driver API) would alleviate the need for every piece of code to keep track of the context and provide appropriate setters/getters. In our example, TTG has a setter (the parsec context is given as a parameter to ttg::initialize), MADNESS provides both a setter and a getter (too ugly to mention in respectable company), and TA has its own. Having a standard interface for dealing with the most common/optimal scenario (a singleton parsec context) would be useful.
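To make the CUDA-runtime analogy concrete, here is how a component could bootstrap itself against such an implicit singleton context, reusing the hypothetical functions sketched earlier in this thread; the same code works whether or not some other vertex (e.g. the root) got there first:

```cpp
void component_initialize(int *argc, char ***argv) {
  parsec_context_t *ctx =
      parsec_initialized() ? parsec_get_default_context()
                           : parsec_init_default(-1, argc, argv);
  // register this component's private taskpool on ctx, e.g.
  // parsec_context_add_taskpool(ctx, my_taskpool);
  (void)ctx;
}
```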