Add tests for numpy, tensorflow, torch, and mxnet #106
Conversation
It would be nice to get some eyes on this. Once there are tests for the current behaviour using different libraries, it should be easier to judge the impact of the changes in #104
I am wondering about this: if we include this test set in the array API tests (@asmeurer), maybe each library could run it in its own CI so we don't have to worry about GPU access? (cc: @kmaehashi)
I'm not sure how the DLPack test suite should relate to the array API test suite. My initial thought is that it makes sense for DLPack to have its own test suite, since DLPack is independent of the array API. I believe the array API test suite currently has no DLPack tests beyond basic checks that the methods exist and run without erroring. If DLPack itself is tested extensively by its own test suite, then perhaps the array API test suite can just defer to that, or pull it in somehow.
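For context, that existence-level check amounts to something like the following sketch (illustrative only, not the array API suite's actual code; `xp` is a placeholder for whichever array namespace is under test):

```python
# Illustrative sketch of "methods exist and run without erroring";
# `xp` is an assumed fixture naming the array namespace under test.
def test_dlpack_methods_run(xp):
    x = xp.asarray([1.0, 2.0, 3.0])
    capsule = x.__dlpack__()  # should return a PyCapsule without raising
    device_type, device_id = x.__dlpack_device__()  # standard 2-tuple
    assert capsule is not None
```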
I am thinking the tests from this PR (and future PRs) should be added to each library's own test suite once it claims to support DLPack. Without a universal test in this repo that can target every library, I worry that we're just reinventing the wheel here... But there's no way to write a universal test without an agreed-upon interface, and the array API could serve as a good proxy.
To add onto @leofang's comment: for a given library (e.g. NumPy), it is not easy to test DLPack support against other tensor libraries (e.g. PyTorch), since doing so would add extra dependencies to the project and increase CI time.
Sorry, I didn't read Tirth's code until now. This PR essentially proposes an N-to-N conversion test across N libraries. The test suite would be hard to maintain unless (1) we use the array API to be backend-agnostic, or (2) we have automation to auto-generate the added test scripts.
Perhaps in the future, if N becomes large, it might be hard to maintain the tests. For now, with N = 4 there are ~16 tests. Each library has a different pattern for using DLPack, so it is not easy to see how to automate it today. It would be good to put this in now, and automate it when N grows larger and the use of DLPack becomes more standardized.
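To illustrate what that automation could look like, here is a rough sketch (not code from this PR) of generating the N-to-N matrix with pytest, assuming every library eventually exposes a top-level `from_dlpack` per the array API standard; as noted in the PR description below, NumPy did not have one at the time:

```python
# Sketch only: assumes each library exposes a standard `from_dlpack`
# entry point, which was not yet true everywhere when this was written.
import itertools

import numpy as np
import pytest
import torch

LIBRARIES = {
    # name -> (constructor, consumer)
    "numpy": (lambda data: np.asarray(data), np.from_dlpack),
    "torch": (lambda data: torch.tensor(data), torch.from_dlpack),
    # tensorflow/mxnet entries would go here once they expose the same API
}


@pytest.mark.parametrize("src,dst", itertools.product(LIBRARIES, LIBRARIES))
def test_roundtrip(src, dst):
    make, from_dlpack = LIBRARIES[src][0], LIBRARIES[dst][1]
    x = make([1.0, 2.0, 3.0])
    y = from_dlpack(x)  # consumer calls x.__dlpack__() under the hood
    assert [float(v) for v in y] == [1.0, 2.0, 3.0]
```

Adding an (N+1)-th library would then be a one-line change rather than 2N new hand-written tests.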
@tqchen Any thoughts on this?
This PR adds basic tests for some Python packages that support DLPack. Currently, it only tests CPU arrays. Some observations: NumPy doesn't have a `from_dlpack` function, and both TensorFlow and MXNet don't have the `__dlpack__` and `__dlpack_version__` dunder methods. These need to be updated. More tests can be added in follow-up PRs. These tests are triggered for each commit on main and on PRs. Should we instead schedule them to run every week or month? A GPU isn't available, so I am not sure if we can test libraries like CuPy on the CI. Is it possible to emulate a GPU or other devices in the CI?
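For reference, the per-library conversion patterns these tests have to juggle look roughly like this (a sketch based on the public APIs at the time of this thread; module paths may have moved in later releases):

```python
# Sketch of the per-library DLPack patterns as documented at the time.
import mxnet as mx
import tensorflow as tf
import torch

t = torch.arange(3)

# PyTorch: capsule helpers live under torch.utils.dlpack.
capsule = torch.utils.dlpack.to_dlpack(t)
t2 = torch.utils.dlpack.from_dlpack(capsule)

# TensorFlow: helpers live under tf.experimental.dlpack. A capsule can
# be consumed only once, so a fresh one is produced for each consumer.
tf_t = tf.experimental.dlpack.from_dlpack(torch.utils.dlpack.to_dlpack(t))

# MXNet: separate read/write entry points on the producer side.
mx_a = mx.nd.from_dlpack(tf.experimental.dlpack.to_dlpack(tf_t))
capsule_mx = mx.nd.to_dlpack_for_read(mx_a)

# NumPy (at the time): no `np.from_dlpack` and no `ndarray.__dlpack__`,
# hence the first observation above.
```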