
Consider syntactic sugar for vector placement #390

Open

jaredhoberock opened this issue Jun 13, 2017 · 1 comment

Comments

@jaredhoberock
Collaborator

It might be useful if vector's execution policy constructor used the policy's underlying allocator as the vector's allocator when it makes sense to do so.

For example, this syntax:

vector<int, my_allocator> data(par.on(exec), 100);

Could be made equivalent to this syntax:

auto policy = par.on(exec);
vector<int, my_allocator> data(policy, 100, policy.executor().allocator<int>());

This would apply in cases where my_allocator is constructible from policy.executor().allocator<int>(). In other cases, the vector's allocator would just be default-constructed as usual.

This sort of syntax would make it convenient to construct a vector with affinity to a particular GPU:

vector<int, device_allocator<int>> data(par.on(device(1)), 100);
@jaredhoberock
Collaborator Author

jaredhoberock commented Jun 13, 2017

We might also consider a basic_vector<T, ExecutionPolicy, Allocator = some default> type that embeds a default execution policy, used in place of seq by vector method overloads that lack an ExecutionPolicy parameter. The default for Allocator would probably be whatever allocator is associated with ExecutionPolicy's executor.

With this, we could introduce aliases such as cuda::vector<T> = basic_vector<T, cuda::parallel_policy>.

Something equivalent to thrust::device_vector could be done similarly, and would use cuda::device_allocator for its choice of allocator type.
