It might be useful if `vector`'s execution policy constructor used the policy's underlying allocator as the `vector`'s allocator when it makes sense to do so. For example, this syntax:

```c++
auto policy = par.on(exec);
vector<int, my_allocator> data(policy, 100);
```

could be made equivalent to this syntax:

```c++
auto policy = par.on(exec);
vector<int, my_allocator> data(policy, 100, exec.allocator<int>());
```

This would apply in cases where `my_allocator` is constructible from `policy.executor().allocator<int>()`. In other cases, the `vector`'s allocator would just be default constructed as usual.
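A minimal sketch of how such a constructor could choose between those two cases (the delegation target and member names here are assumptions, not proposed wording):

```c++
// Inside the hypothetical vector<T, Allocator>: adopt the allocator offered by the
// policy's executor when Allocator is constructible from it; otherwise fall back
// to a default-constructed Allocator, as with the existing constructors.
template <class ExecutionPolicy>
vector(ExecutionPolicy&& policy, size_type n)
  : vector(n, [&]() -> Allocator {
      using exec_allocator = decltype(policy.executor().template allocator<T>());
      if constexpr (std::is_constructible_v<Allocator, exec_allocator>)
        return Allocator(policy.executor().template allocator<T>());
      else
        return Allocator();
    }())
{}
```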
This sort of syntax would make it convenient to construct a `vector` with affinity to a particular GPU fairly easily.
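For instance, a sketch along these lines, where the `cuda::executor` device-id constructor and `cuda::device_allocator` names are assumptions used only for illustration:

```c++
// Hypothetical sketch: bind an executor to a particular GPU; the policy constructor
// then picks up that executor's device allocator for the vector's storage.
cuda::executor gpu1{1};      // executor associated with device 1 (assumed API)
auto policy = par.on(gpu1);

// Equivalent to passing gpu1.allocator<int>() explicitly, per the proposal above.
vector<int, cuda::device_allocator<int>> data(policy, 100);
```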
We might also consider a `basic_vector<T, ExecutionPolicy, Allocator = some default>` type that embeds a default execution policy to use, instead of `seq`, for overloads of `vector`'s member functions that do not take an `ExecutionPolicy` parameter. The default used for `Allocator` would probably be whatever allocator is associated with `ExecutionPolicy`'s executor.

With this, we could introduce aliases such as `cuda::vector<T> = basic_vector<T, cuda::parallel_policy>`.

Something equivalent to `thrust::device_vector` could be done similarly, and would use `cuda::device_allocator` for its choice of allocator type.
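A rough, self-contained sketch of how those pieces could fit together; all of the names below (`executor_allocator_t`, `cuda::parallel_policy`, `cuda::device_allocator`, the stub executor) are assumptions standing in for whatever the real types would be:

```c++
#include <memory>
#include <utility>
#include <vector>

namespace cuda
{
  // Stand-in for a real CUDA device allocator (thrust::device_vector uses the real thing).
  template <class T>
  using device_allocator = std::allocator<T>;

  // Stand-in for a CUDA execution policy whose executor hands out device allocators.
  struct parallel_policy
  {
    struct executor_type
    {
      template <class T>
      device_allocator<T> allocator() const { return {}; }
    };
    executor_type executor() const { return {}; }
  };
}

// The allocator associated with an execution policy's executor.
template <class ExecutionPolicy, class T>
using executor_allocator_t =
    decltype(std::declval<ExecutionPolicy&>().executor().template allocator<T>());

// basic_vector embeds a default execution policy; overloads without an
// ExecutionPolicy parameter would use a default-constructed policy instead of seq.
template <class T,
          class ExecutionPolicy,
          class Allocator = executor_allocator_t<ExecutionPolicy, T>>
class basic_vector : public std::vector<T, Allocator>
{
public:
  using std::vector<T, Allocator>::vector;  // inherit the ordinary constructors
  // ... policy-aware overloads of the usual member functions would go here ...
};

namespace cuda
{
  // cuda::vector<T> = basic_vector<T, cuda::parallel_policy>, which with the
  // default above picks cuda::device_allocator<T>, like thrust::device_vector.
  template <class T>
  using vector = basic_vector<T, parallel_policy>;
}

int main()
{
  cuda::vector<int> v(100);  // allocator type deduced from the policy's executor
}
```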