Add support for float16 to HIP backend #280

Merged: 2 commits into master from hip-support-float16 on Nov 28, 2024
Conversation

loostrum (Member) commented:

The HIP backend failed with float16 numpy arrays as input. This is fixed by adding a float16 entry to the dtype map. ctypes does not have a float16 type, but any type of the same size works, so I opted for int16.
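For illustration, a minimal sketch of what such a mapping could look like; the name `dtype_map` and the exact entries shown here are assumptions for this example, not the actual Kernel Tuner source:

```python
import ctypes
import numpy as np

# Illustrative numpy-dtype-to-ctypes mapping; the entries and the name
# `dtype_map` are assumptions for this sketch, not the Kernel Tuner source.
dtype_map = {
    "int16": ctypes.c_int16,
    "int32": ctypes.c_int32,
    "float32": ctypes.c_float,
    "float64": ctypes.c_double,
    # ctypes has no float16; any 2-byte type gives the correct size, so int16 is used.
    "float16": ctypes.c_int16,
}

arg = np.zeros(1024, dtype=np.float16)
ctype = dtype_map[str(arg.dtype)]   # -> ctypes.c_int16 (2 bytes per element)
buf = (ctype * arg.size)()          # host-side buffer with the right total size
ctypes.memmove(buf, arg.ctypes.data, arg.nbytes)
```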

Additionally, I have added myself to the Kernel Tuner author list as discussed with Ben.

@stijnh (Member) commented Nov 28, 2024:

While I think this PR is good, I am pretty sure that the HIP backend does not need to know the dtype of the arguments. It can just directly copy the bytes from the numpy array onto the GPU without looking at the data type. The pycuda backend does the same. I'm not sure why this check on the dtype exists in the first place.
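As a rough sketch of that idea, a host-to-device copy only needs the raw host pointer and the size in bytes; `dev_alloc` and `dev_memcpy_htod` below are hypothetical placeholders standing in for the backend's allocation and copy routines, not real HIP or Kernel Tuner APIs:

```python
import ctypes
import numpy as np

def copy_to_device(dev_alloc, dev_memcpy_htod, arg: np.ndarray):
    """Sketch of a dtype-agnostic host-to-device copy.

    `dev_alloc` and `dev_memcpy_htod` are placeholder names for the backend's
    allocation and copy calls (e.g. hipMalloc/hipMemcpy wrappers); they are
    assumptions for this example, not actual Kernel Tuner or HIP Python APIs.
    """
    # Only the total size in bytes and the raw host address are needed;
    # the element dtype of the array is never inspected.
    device_ptr = dev_alloc(arg.nbytes)
    dev_memcpy_htod(device_ptr, ctypes.c_void_p(arg.ctypes.data), arg.nbytes)
    return device_ptr
```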

@benvanwerkhoven (Collaborator) commented:

Good point, Stijn! I propose we merge this and then open an issue to discuss removing the check.

@benvanwerkhoven merged commit 6ebf773 into master on Nov 28, 2024 (3 checks passed).
@loostrum deleted the hip-support-float16 branch on Nov 28, 2024 at 16:35.