Add IPC API to support IPC handles #2
Conversation
Force-pushed from f96e337 to 159ba77
#include <stdlib.h>

struct umf_ipc_data_t {
    uint32_t size;
why not uint64_t?
This variable holds the size of the umf_ipc_data_t data structure, which can hardly be big, so I used uint32_t to decrease the size of the IPC handle. Now I am thinking that even uint16_t should be enough.
The umf_ipc_data_t will still be 16B no matter whether you use uint32_t, uint16_t, or uint64_t, due to alignment requirements.
Good point. So there are two possible ways: use special pragmas to forbid the compiler from adding implicit padding, or, as you said, use uint64_t. I think for now we can simply use uint64_t, and in the future, if we have a requirement to decrease the size of the handle, we can modify the structure.
/// \param ipcHandle [in] IPC handle.
/// \param ptr [out] pointer to the memory in the current process.
/// \return UMF_RESULT_SUCCESS on success or appropriate error code on failure.
enum umf_result_t umfOpenIPCHandle(umf_memory_pool_handle_t hPool,
Should we define some requirements for the hPool here?
Do you mean that the pool should use the same memory provider as was used to create the handle on the producer side?
Yes, unless there are some less-restrictive requirements we can come up with. Maybe it should use a provider that is able to access/copy the memory or something like this?
/// \param ipcHandle [out] returned IPC handle.
/// \param size [out] size of IPC handle in bytes.
/// \return UMF_RESULT_SUCCESS on success or appropriate error code on failure.
enum umf_result_t umfGetIPCHandle(const void *ptr, umf_ipc_handle_t *ipcHandle,
Wouldn't it be better to rename it to "create" to be consistent with other umf functions?
I have the following considerations in mind: the same naming is used by CUDA and L0 today (both have a similar set of functions). Also, the implementation of a "get" function implies that handles can be cached, so a recurring request might just return the handle from the cache.
@@ -52,6 +52,12 @@ struct umf_memory_provider_ops_t {
    enum umf_result_t (*purge_force)(void *provider, void *ptr, size_t size);
    const char *(*get_name)(void *provider);
    bool (*supports_device)(const char *name);
    enum umf_result_t (*get_ipc_handle_size)(void *provider, size_t *size);
should all providers implement these functions?
No. The memory provider can just return UMF_RESULT_ERROR_NOT_SUPPORTED.
We also discussed with Igor that the memory provider interface (ops structure) has mandatory and optional functions. And for optional functions, UMF can provide the default implementation that always returns UMF_RESULT_ERROR_NOT_SUPPORTED error code.
    return res;
}

// TODO: we have no cases with multiple memory providers
OK, but how would you implement the case where we have more than a single provider?
We can extend the memory tracker capabilities. But the main question is: do we want to support multiple memory providers at all? That is why I keep this code with a TODO comment. In the past, when we discussed the pool creation API with Igor, we briefly discussed whether it makes sense to allow multiple memory providers and what the use case would be.
What about the cases where the user wants a pool where some memory is allocated on CPU and the rest on disk? Or GPU, or something else? Then you will have multiple providers.
> what about the cases where the user wants a pool where some memory is allocated on CPU and the rest on disk? or GPU or sth else? Then you will have multiple providers.
I cannot imagine the use case for that. But even for such a case, I think there should still be a single memory provider that does such magic internally. E.g. some special provider could be created that internally uses upstream memory providers for CPU, GPU, disk, etc. memory targets. The point is that from the pool perspective, there will be a single pool manager and a single memory provider, because pool manager implementations can work with just a single memory provider.
Anyway, I think it is a topic for separate discussion, not for this PR. Let's talk about that in our sync.
This is the initial skeleton of the IPC API. In the next PRs I am going to add caching functionality to minimize interaction with memory providers (GPU drivers).