
[HIP] Support managed allocations in USM free #1193

Merged
merged 1 commit into oneapi-src:main from managed-alloc
Feb 2, 2024

Conversation

npmiller
Contributor

@npmiller npmiller commented Dec 14, 2023

In ROCm 5.7.1 some USM allocations are reported as managed; these are freed in the same way as device allocations.

Testing in intel/llvm#12380
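
For illustration, a minimal sketch of the freeing logic described above, assuming only standard HIP APIs (hipPointerGetAttributes, hipFree, hipHostFree); freeUSMPointer is a hypothetical helper name, not the adapter's actual code, and the attribute member is named memoryType in ROCm releases before 6.0:

    #include <hip/hip_runtime.h>

    // Hypothetical helper (not the adapter's actual code): free a USM pointer,
    // treating managed allocations the same way as device allocations.
    hipError_t freeUSMPointer(void *Ptr) {
      hipPointerAttribute_t Attrs{};
      hipError_t Err = hipPointerGetAttributes(&Attrs, Ptr);
      if (Err != hipSuccess)
        return Err;

      // Member is named 'memoryType' before ROCm 6.0 and 'type' from 6.0 onwards.
      switch (Attrs.type) {
      case hipMemoryTypeHost:
        return hipHostFree(Ptr); // host USM allocation
      case hipMemoryTypeManaged: // ROCm 5.7.1 reports some USM allocations as managed
      case hipMemoryTypeDevice:
        return hipFree(Ptr);     // managed allocations freed the same way as device
      default:
        return hipErrorInvalidValue;
      }
    }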

@npmiller npmiller requested a review from a team as a code owner December 14, 2023 17:51
@npmiller npmiller requested a review from ldrumm December 14, 2023 17:51
Contributor

@GeorgeWeb GeorgeWeb left a comment


LGTM

@GeorgeWeb
Contributor

@npmiller This PR is fine as is. I just have a question that ties into the managed memory type now finally used from ROCm v5.7.1 onwards: should we also address this in urUSMGetMemAllocInfo, where hipMemoryTypeManaged isn't being checked as a valid memory object type either?

@npmiller
Contributor Author

It does check this:

      Value = hipPointerAttributeType.isManaged;
      if (Value) {
        // pointer to managed memory
        return ReturnValue(UR_USM_TYPE_SHARED);
      }

I'm not sure if it's exactly the same thing though

@GeorgeWeb
Contributor

GeorgeWeb commented Jan 12, 2024

It does check this:

      Value = hipPointerAttributeType.isManaged;
      if (Value) {
        // pointer to managed memory
        return ReturnValue(UR_USM_TYPE_SHARED);
      }

I'm not sure if it's exactly the same thing though

The isManaged member is set for pointers to managed memory allocated via the unified memory allocator in HIP, so it should be okay. My bad, I didn't spot that the if (Value) was checking isManaged.
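
For context, a sketch of where the quoted isManaged check fits into a memory-type query; this is an assumed shape rather than the adapter's actual urUSMGetMemAllocInfo code, and classifyUSMPointer and USMType are hypothetical names:

    #include <hip/hip_runtime.h>

    enum class USMType { Unknown, Host, Device, Shared };

    // Hypothetical classifier: isManaged is set for pointers allocated through
    // HIP's unified (managed) memory allocator, which maps to shared USM.
    USMType classifyUSMPointer(const void *Ptr) {
      hipPointerAttribute_t Attrs{};
      if (hipPointerGetAttributes(&Attrs, Ptr) != hipSuccess)
        return USMType::Unknown;

      if (Attrs.isManaged) // managed allocation -> shared USM
        return USMType::Shared;

      // Member is named 'memoryType' before ROCm 6.0 and 'type' from 6.0 onwards.
      switch (Attrs.type) {
      case hipMemoryTypeDevice:
        return USMType::Device;
      case hipMemoryTypeHost:
        return USMType::Host;
      default:
        return USMType::Unknown;
      }
    }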

@codecov-commenter

codecov-commenter commented Jan 15, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Comparison is base (b66cf9b) 15.46% compared to head (3173420) 15.46%.
Report is 3 commits behind head on main.


Additional details and impacted files
@@           Coverage Diff           @@
##             main    #1193   +/-   ##
=======================================
  Coverage   15.46%   15.46%           
=======================================
  Files         238      238           
  Lines       33883    33883           
  Branches     3747     3747           
=======================================
  Hits         5239     5239           
  Misses      28593    28593           
  Partials       51       51           


@EwanC
Contributor

EwanC commented Jan 24, 2024

I bumped into an issue locally with shared USM while trying out HIP support for SYCL-Graphs on ROCm 5.7.1, and this patch fixed it. It would be helpful to get this merged (or tagged ready-to-merge).

@GeorgeWeb GeorgeWeb added the ready to merge label Jan 24, 2024
@npmiller npmiller force-pushed the managed-alloc branch 2 times, most recently from 4736a58 to 3173420 on January 26, 2024 at 11:29
@kbenzie
Contributor

kbenzie commented Jan 29, 2024

Rebasing should resolve the unrelated check failures.

In ROCm 5.7.1 some USM allocations are reported as managed; these are freed in the same way as device allocations.
@kbenzie kbenzie merged commit 2974c52 into oneapi-src:main Feb 2, 2024
52 checks passed