[SYCL][UR][L0] Access UMF pool handles through usm::pool_manager #905
base: main
Changes from all commits: 85c25cd, d251654, fb98b75, e874284, 350a5dc
@@ -26,6 +26,7 @@
#include "queue.hpp"

#include <umf_helpers.hpp>
#include <ur_pool_manager.hpp>

struct ur_context_handle_t_ : _ur_object {
  ur_context_handle_t_(ze_context_handle_t ZeContext, uint32_t NumDevices,
@@ -96,15 +97,8 @@ struct ur_context_handle_t_ : _ur_object {

  // Store USM pool for USM shared and device allocations. There is 1 memory
  // pool per each pair of (context, device) per each memory type.
  std::unordered_map<ze_device_handle_t, umf::pool_unique_handle_t>
      DeviceMemPools;
  std::unordered_map<ze_device_handle_t, umf::pool_unique_handle_t>
      SharedMemPools;
  std::unordered_map<ze_device_handle_t, umf::pool_unique_handle_t>
      SharedReadOnlyMemPools;

  // Store the host memory pool. It does not depend on any device.
  umf::pool_unique_handle_t HostMemPool;
  usm::pool_manager<usm::pool_descriptor> PoolManager;
  usm::pool_manager<usm::pool_descriptor> ProxyPoolManager;
Review thread on the PoolManager / ProxyPoolManager members:

Comment: I'm wondering if it wouldn't be better to just have a flag in pool_descriptor to decide whether we do any pooling. We could then have only a single pool manager.

Comment: But is it fitting to place such an option in the pool_descriptor?

Comment: Why not? I think in the future we will want to have even more options in pool_descriptor, e.g. related to access characteristics, that would influence pooling.

Comment: I agree, this looks odd. If you decide to do otherwise, please add a comment.
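To make the single-manager suggestion above concrete, here is a minimal, self-contained sketch of a descriptor that carries a pooling flag and one manager keyed by it. All names here (pool_descriptor_sketch, poolingEnabled, single_pool_manager_sketch) are hypothetical stand-ins, not the actual usm::pool_descriptor or usm::pool_manager interfaces from ur_pool_manager.hpp.

#include <cstddef>
#include <functional>
#include <memory>
#include <unordered_map>

// Hypothetical stand-ins for the real UR/UMF types; not the actual API.
struct pool_descriptor_sketch {
  void *hDevice = nullptr;    // nullptr models the host pool
  int memType = 0;            // e.g. device / shared / shared-read-only / host
  bool poolingEnabled = true; // the flag suggested above: false -> proxy pool

  bool operator==(const pool_descriptor_sketch &Other) const {
    return hDevice == Other.hDevice && memType == Other.memType &&
           poolingEnabled == Other.poolingEnabled;
  }
};

struct pool_descriptor_hash {
  std::size_t operator()(const pool_descriptor_sketch &Desc) const {
    return std::hash<void *>{}(Desc.hDevice) ^
           (std::hash<int>{}(Desc.memType) << 1) ^
           (std::hash<bool>{}(Desc.poolingEnabled) << 2);
  }
};

struct pool_sketch {}; // stands in for a UMF memory pool

// A single manager: pooled and tracking-only (proxy) pools share one map,
// distinguished only by the poolingEnabled bit of the descriptor.
class single_pool_manager_sketch {
  std::unordered_map<pool_descriptor_sketch, std::unique_ptr<pool_sketch>,
                     pool_descriptor_hash>
      Pools;

public:
  void addPool(const pool_descriptor_sketch &Desc,
               std::unique_ptr<pool_sketch> Pool) {
    Pools.emplace(Desc, std::move(Pool));
  }

  pool_sketch *getPool(const pool_descriptor_sketch &Desc) {
    auto It = Pools.find(Desc);
    return It == Pools.end() ? nullptr : It->second.get();
  }
};

With a shape like this, the separate ProxyPoolManager member would go away: proxy pools would simply be registered with poolingEnabled set to false.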
  // Allocation-tracking proxy pools for direct allocations. No pooling used.
  std::unordered_map<ze_device_handle_t, umf::pool_unique_handle_t>
@@ -252,3 +246,8 @@ struct ur_context_handle_t_ : _ur_object {
// mutex guarding the container with contexts because the context can be removed
// from the list of tracked contexts.
ur_result_t ContextReleaseHelper(ur_context_handle_t Context);

// Template helper function for creating USM pools for given pool descriptor.
template <typename P, typename... Args>
std::pair<ur_result_t, umf::pool_unique_handle_t>
createUMFPoolForDesc(usm::pool_descriptor &Desc, Args &&...args);
Review thread on createUMFPoolForDesc:

Comment: There's now a helper for this sort of case: https://github.com/oneapi-src/unified-runtime/blob/main/source/ur/ur.hpp#L277

Comment: I don't know if there's a benefit to using it, since I call umf::poolMakeUnique, which returns a pair anyway.
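For context on the poolMakeUnique remark above, here is a rough, self-contained sketch of the shape such a helper could take. The names ur_result_t_sketch, pool_sketch, poolMakeUniqueSketch, and createUMFPoolForDescSketch are placeholders; the real umf::poolMakeUnique and the actual createUMFPoolForDesc definition in the PR may differ.

#include <memory>
#include <utility>

// Placeholder types; the real ones come from the UR and UMF helper headers.
using ur_result_t_sketch = int;
constexpr ur_result_t_sketch UR_RESULT_SUCCESS_SKETCH = 0;

struct pool_sketch {}; // stands in for a UMF memory pool
using pool_unique_handle_sketch = std::unique_ptr<pool_sketch>;

struct pool_descriptor_sketch {}; // stands in for usm::pool_descriptor

// Placeholder for a pair-returning factory in the spirit of umf::poolMakeUnique.
// P would name the concrete UMF pool implementation type.
template <typename P, typename... Args>
std::pair<ur_result_t_sketch, pool_unique_handle_sketch>
poolMakeUniqueSketch(Args &&...Arguments) {
  (void)sizeof...(Arguments);
  return {UR_RESULT_SUCCESS_SKETCH, std::make_unique<pool_sketch>()};
}

// One plausible shape for the helper declared in the diff: build a pool for
// the descriptor by forwarding everything to the factory and return its
// (result, handle) pair unchanged, which is why the author sees little
// benefit in an extra pair-unwrapping utility here.
template <typename P, typename... Args>
std::pair<ur_result_t_sketch, pool_unique_handle_sketch>
createUMFPoolForDescSketch(pool_descriptor_sketch &Desc, Args &&...Arguments) {
  (void)Desc; // the real helper would derive providers/parameters from Desc
  return poolMakeUniqueSketch<P>(std::forward<Args>(Arguments)...);
}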