fix is_contiguous_logic issue in dlpack #668
Open
Hello, I've been making good use of this open source project. While using it, I found an issue and fixed it, and I'd like to contribute the fix.
The issue I found

I want to use `set_shared_memory_region_from_dlpack` for GPU-to-GPU shared memory. I converted a PyTorch tensor to DLPack using `to_dlpack`, but the call raises an error at the contiguity check: even though I made the PyTorch tensor contiguous, the strides in the resulting DLPack capsule no longer match the expected contiguous strides. This is a known PyTorch issue, and the PyTorch maintainers say it is not a bug but deliberate logic that prevents another issue.
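A minimal repro sketch of the stride behavior (pure PyTorch; the round-trip through `from_dlpack` is just a convenient way to inspect the strides the capsule carries, and the exact values printed depend on the PyTorch version that introduced the normalization):

```python
import torch
from torch.utils.dlpack import from_dlpack, to_dlpack

# A contiguous tensor with a size-1 dimension in the middle.
t = torch.ones(2, 1, 3).contiguous()
print(t.stride())  # (3, 3, 1) -- ordinary row-major contiguous strides

# Round-trip through DLPack to see the strides the capsule reports.
u = from_dlpack(to_dlpack(t))
print(u.stride())  # (3, 1, 1) on affected versions -- the size-1 dim's stride becomes 1
```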
Specifically, `to_dlpack` forces the stride to 1 for every dimension whose size is 1. So I fixed the `is_contiguous` logic to skip the stride comparison for dimensions of size 1, and it works well in my code.
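Here is a sketch of the corrected check, assuming the check compares the DLPack strides against computed row-major strides; the function name and signature are illustrative, not this project's exact code:

```python
def is_contiguous(shape, strides):
    # DLPack allows strides to be NULL/None for compact row-major tensors.
    if strides is None:
        return True
    expected = 1
    # Walk the dimensions from innermost to outermost, accumulating the
    # expected row-major stride.
    for size, stride in zip(reversed(shape), reversed(strides)):
        # For a size-1 dimension the stride is never used to address a
        # second element, so skip the comparison there (this is the fix).
        if size != 1 and stride != expected:
            return False
        expected *= size
    return True

# Strides as reported by to_dlpack for a contiguous (2, 1, 3) tensor:
assert is_contiguous((2, 1, 3), (3, 1, 1))
# Ordinary contiguous strides still pass:
assert is_contiguous((2, 1, 3), (3, 3, 1))
# A genuinely non-contiguous (transposed) layout is still rejected:
assert not is_contiguous((2, 3), (1, 2))
```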
If this project has a contribution process, I will follow it.
Thanks.