RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm #12
Comments
Full stack traceback: `GAN> py MNIST_GAN.py`
I am running my code with Anaconda 3 and PyCharm or Jupyter Notebook under Windows 10 64-bit. My current problem is described below: I encountered the same error when I run my code with CUDA enabled, but running it without CUDA works fine. This means I cannot run my code on the GPU, even though torch.cuda.is_available() tells me True.
Traceback (most recent call last):
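The error in that comment typically comes from a model that lives on the GPU being fed a tensor that is still on the CPU. Here is a minimal sketch that reproduces it (an assumption for illustration, not the MNIST_GAN.py code from this thread):

```python
import torch
import torch.nn as nn

# Minimal reproduction: the layer lives on the GPU, the input does not.
model = nn.Linear(10, 2).cuda()   # weights moved to the GPU
x = torch.randn(4, 10)            # input tensor left on the CPU

out = model(x)  # RuntimeError: Expected object of device type cuda but got
                # device type cpu for argument #2 'mat1' in call to _th_addmm
```

Moving the input with `x = x.cuda()` (or `x.to('cuda')`) before the forward pass makes the call succeed.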
I am also facing this same issue. Does anyone have a solution to it?
`new_layer = new_layer.cuda()`
Did anyone get a solution for this?
@liangjiubujiu Can you specify what you mean? It used to work in previous versions of PyTorch. Can you share a fix or explain what you did, to help the rest of us?
@codeprb @Pratikrocks I am looking into a solution; in the meantime you can try running it on the CPU if you remove all the .cuda() calls.
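As a general alternative to hard-coded .cuda() calls, a device-agnostic pattern handles both cases; this is a generic sketch, not code from this issue:

```python
import torch
import torch.nn as nn

# Pick the device once; model and data both follow it.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = nn.Linear(10, 2).to(device)
x = torch.randn(4, 10).to(device)

out = model(x)  # runs on the GPU when available, otherwise on the CPU
```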
Each tensor (the inputs as well as any custom intermediate tensors created for project-specific purposes) should be moved to the device in use, as mentioned in a previous reply by @liangjiubujiu.
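To illustrate the point about project-specific intermediate tensors, here is a hedged sketch (the `Net` module and its helper tensor are made up for illustration): tensors created inside `forward()` default to the CPU unless you give them the input's device.

```python
import torch
import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def forward(self, x):
        # Build the helper tensor on the same device as the input;
        # a bare torch.ones(...) would silently land on the CPU.
        noise = torch.ones(x.size(0), 10, device=x.device)
        return self.fc(x + noise)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
net = Net().to(device)
out = net(torch.randn(4, 10).to(device))  # no device mismatch
```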
Hello, did you fix your error? I have the same problem when I use nn.Linear(); I checked that all my data is on my GPU, yet it still raises this error.
The reason might be that you are computing with tensors from different devices. I got this problem when I passed CrossEntropyLoss a CPU tensor and a CUDA tensor :)
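A minimal sketch of that loss-side mismatch (a generic example, not this thread's code): the logits are on the GPU while the targets stayed on the CPU.

```python
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
logits = torch.randn(4, 3, device="cuda")  # model output on the GPU
targets = torch.tensor([0, 2, 1, 0])       # labels still on the CPU

# loss = criterion(logits, targets)        # raises a device-mismatch RuntimeError
loss = criterion(logits, targets.to(logits.device))  # move the targets first, then it works
```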
I am trying to run this program, but it returns:
RuntimeError: Expected object of device type cuda but got device type cpu for argument #2 'mat1' in call to _th_addmm
Note: I am running the notebook file as a regular Python file.
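One quick way to localize the mismatch before the failing call is to print where the model parameters and the current batch actually live; `report_devices` here is a hypothetical helper, not part of the notebook.

```python
import torch

def report_devices(model, batch):
    # Print the device of the model's parameters and of the input batch.
    print("model parameters on:", next(model.parameters()).device)
    print("batch on:", batch.device)
```

If the two lines disagree (for example cuda:0 vs cpu), moving the batch with `batch = batch.to(next(model.parameters()).device)` before the forward pass resolves the error.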