Add Scalar backend #306
base: dev-master
Conversation
Codecov Report
```diff
@@             Coverage Diff              @@
##           dev-master     #306      +/-   ##
==============================================
+ Coverage       13.75%   17.37%    +3.61%
==============================================
  Files             282      274        -8
  Lines           55158    57728     +2570
  Branches        24712    26747     +2035
==============================================
+ Hits             7588    10030     +2442
+ Misses          41574    39143     -2431
- Partials         5996     8555     +2559
==============================================
```

See 70 files with indirect coverage changes.
We need to be careful here not to mess up the gradient part.
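To illustrate the kind of pitfall being flagged (a sketch only, not code from this PR): in libtorch, an in-place element write into a leaf tensor that requires grad throws, so raw writes need to be kept out of autograd's view.

```cpp
#include <torch/torch.h>

// Sketch only (not from this PR): in-place element writes interact
// badly with autograd. Assigning into a leaf tensor that requires
// grad throws at runtime.
int main() {
  torch::Tensor t = torch::zeros({4}, torch::requires_grad());

  // t[0] = 1.0;  // would throw: leaf Variable requiring grad used in-place

  {
    // One common workaround: suspend gradient tracking for the raw write.
    torch::NoGradGuard no_grad;
    t[0] = 1.0;  // fills element 0 in place, without recording in autograd
  }
  return 0;
}
```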
Is it possible to avoid using a switch? Doesn't torch have its own function you can call?
Since we're dealing with cytnx types, I don't think we can use PyTorch for this.
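For context, the switch under discussion presumably maps cytnx's own dtype tags onto ATen scalar types. A minimal sketch of that idea (the enum values and function name here are hypothetical, not cytnx's actual API):

```cpp
#include <ATen/ATen.h>
#include <stdexcept>

// Hypothetical cytnx-style dtype tag (illustrative only).
enum class CyType { Double, Float, ComplexDouble, Int64 };

// Sketch of the switch being discussed: translate a cytnx dtype into
// the corresponding ATen scalar type. libtorch has no built-in that
// understands cytnx's type tags, hence the manual switch.
at::ScalarType to_aten_dtype(CyType t) {
  switch (t) {
    case CyType::Double:        return at::kDouble;
    case CyType::Float:         return at::kFloat;
    case CyType::ComplexDouble: return at::kComplexDouble;
    case CyType::Int64:         return at::kLong;
    default: throw std::runtime_error("unsupported cytnx dtype");
  }
}
```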
Not fully tested, but it at least compiles.
It's important to review SProxy, since we're not writing into memory in the straightforward way. Instead, we first wrap the memory as an at::Tensor, then use
tn[loc] = elem;
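A minimal sketch of that write path (the buffer layout, function name, and flat index `loc` are assumptions for illustration, not the PR's actual SProxy code):

```cpp
#include <torch/torch.h>
#include <vector>

// Sketch of the indirect write path described above: instead of poking
// the raw buffer directly, wrap it as an at::Tensor and index-assign.
void write_elem(double* raw, int64_t n, int64_t loc, double elem) {
  // from_blob creates a tensor view over existing memory (no copy).
  at::Tensor tn = torch::from_blob(raw, {n}, torch::kDouble);
  tn[loc] = elem;  // fills the element in place, visible in `raw`
}

int main() {
  std::vector<double> buf(4, 0.0);
  write_elem(buf.data(), buf.size(), 2, 3.14);  // buf[2] is now 3.14
  return 0;
}
```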