Fix automatic CUDA graphing not working when requiring backwards (#120)
* Pass posTensor as the inputs argument to energyTensor.backward() (the relevant overload is sketched below).
This instructs PyTorch to compute gradients only with respect to the positions.

* Add comments
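
For reference, the libtorch overload being used here, in a simplified rendering; the exact signature (e.g. c10::optional vs. std::optional) varies by PyTorch version, so check the C++ docs for your build:

    void Tensor::backward(const Tensor& gradient = {},
                          c10::optional<bool> retain_graph = c10::nullopt,
                          bool create_graph = false,
                          c10::optional<TensorList> inputs = c10::nullopt) const;

Passing posTensor as the inputs argument limits the autograd pass to that one leaf, matching the commit's note that CUDA graph capture sometimes fails unless gradients are explicitly requested with respect to the positions.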
RaulPPelaez authored Sep 20, 2023
1 parent a0b51a9 commit 2270256
Showing 1 changed file with 5 additions and 1 deletion.
platforms/cuda/src/CudaTorchKernels.cpp
@@ -188,7 +188,11 @@ static void executeGraph(bool outputsForces, bool includeForces, torch::jit::scr
     energyTensor = module.forward(inputs).toTensor();
     // Compute force by backpropagating the PyTorch model
     if (includeForces) {
-        energyTensor.backward();
+        // CUDA graph capture sometimes fails if backwards is not explicitly requested w.r.t positions
+        // See https://github.com/openmm/openmm-torch/pull/120/
+        auto none = torch::Tensor();
+        energyTensor.backward(none, false, false, posTensor);
         // This is minus the forces, we change the sign later on
         forceTensor = posTensor.grad().clone();
         // Zero the gradient to avoid accumulating it
         posTensor.grad().zero_();
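
As a standalone illustration of the same pattern, here is a minimal libtorch sketch, not the plugin code: the toy harmonic energy, tensor shapes, and the names pos/energy/force are assumptions made for the example; only the backward(none, false, false, pos) call mirrors the commit.

    // Minimal sketch (illustrative, not CudaTorchKernels.cpp). Assumes libtorch;
    // the harmonic energy and shapes below are made up for the example.
    #include <torch/torch.h>
    #include <iostream>

    int main() {
        // Positions must require grad so backward() can populate pos.grad().
        torch::Tensor pos = torch::rand({10, 3}, torch::requires_grad());

        // Toy scalar energy E = sum(pos^2); the real code evaluates a
        // TorchScript module's forward() instead.
        torch::Tensor energy = pos.pow(2).sum();

        // Restrict the backward pass to gradients w.r.t. pos, mirroring the
        // commit, instead of differentiating w.r.t. every leaf in the graph.
        auto none = torch::Tensor();
        energy.backward(none, /*retain_graph=*/false, /*create_graph=*/false, pos);

        // Forces are minus the gradient; clone before zeroing the accumulator.
        torch::Tensor force = -pos.grad().clone();
        pos.grad().zero_();

        std::cout << force.sizes() << std::endl;
        return 0;
    }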
