Add general integer exponents #128

Merged
merged 17 commits from exponents into main on Jan 17, 2025
Conversation

@kvhuguenin (Contributor) commented Dec 16, 2024

Add support for exponents > 3 to the main branch.
The core changes to the potential have already been implemented.
The most important change that still has to be implemented and tested in the rest of the code is that exponent is now of data type int rather than float.
We also need some good unit tests. I was thinking of a combination of regression tests for some of the more complicated structures, plus generalized Madelung constants that I have computed in the past with a different code.

Contributor (creator of pull-request) checklist

  • Tests updated (for new features and bugfixes)?
  • Documentation updated (for new features)?
  • Issue referenced (for PRs that solve an issue)?

Reviewer checklist

  • CHANGELOG updated with public API or any other important changes?

📚 Documentation preview 📚: https://torch-pme--128.org.readthedocs.build/en/128/

@PicoCentauri (Contributor) left a comment:

As a test, I would just do a regression test for the function.

@@ -17,6 +17,27 @@ def gamma(x: torch.Tensor) -> torch.Tensor:
    return torch.exp(gammaln(x))


# Auxiliary function for stable Fourier transform implementation
def gammainc_upper_over_powerlaw(exponent, zz):
Contributor:

Give the equation in the docstring here.
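A sketch of what such a docstring could look like (illustrative only, not the PR's actual text; the equation is inferred from the closed-form branches below, which are consistent with Γ((3 - p)/2, z) / z^((3 - p)/2)):

def gammainc_upper_over_powerlaw(exponent, zz):
    r"""Compute :math:`\Gamma\left(\frac{3 - p}{2}, z\right) / z^{\frac{3 - p}{2}}`.

    Here :math:`p` is the integer exponent of the inverse power-law potential,
    :math:`z` the argument, and :math:`\Gamma(a, z)` the upper incomplete
    gamma function. This ratio enters the Fourier-space (long-range) part of
    the :math:`1/r^p` potential after the range separation.

    :param exponent: integer exponent :math:`p` of the potential (1 to 6)
    :param zz: positive tensor argument :math:`z`
    """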

src/torchpme/potentials/inversepowerlaw.py (outdated review thread, resolved)
@E-Rum (Contributor) left a comment:

Looks good, but there are some things that need to be double-checked. I will continue updating the tests once we fix them.

    if exponent == 2:
        return torch.sqrt(torch.pi / zz) * torch.erfc(torch.sqrt(zz))
    if exponent == 3:
        return -torch.expi(-zz)
Contributor:

As far as I know, torch.expi doesn't exist in the torch module…

Comment on lines 21 to 43
def gammainc_upper_over_powerlaw(exponent, zz):
    if exponent not in [1, 2, 3, 4, 5, 6]:
        raise ValueError(f"Unsupported exponent: {exponent}")

    if exponent == 1:
        return torch.exp(-zz) / zz
    if exponent == 2:
        return torch.sqrt(torch.pi / zz) * torch.erfc(torch.sqrt(zz))
    if exponent == 3:
        return -torch.expi(-zz)
    if exponent == 4:
        return 2 * (
            torch.exp(-zz) - torch.sqrt(torch.pi * zz) * torch.erfc(torch.sqrt(zz))
        )
    if exponent == 5:
        return torch.exp(-zz) + zz * torch.expi(-zz)
    if exponent == 6:
        return (
            (2 - 4 * zz) * torch.exp(-zz)
            + 4 * torch.sqrt(torch.pi) * zz**1.5 * torch.erfc(torch.sqrt(zz))
        ) / 3


Contributor:

Can we double-check that everything here is correct? With the new implementation, test_values_ewald.py fails for inverse potentials with an exponent of 1, as it does not coincide with the Madelung constant.
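One way to double-check the branches (a hedged sketch, not part of the PR): compare them against mpmath's generalized incomplete gamma, assuming the identity gammainc_upper_over_powerlaw(p, z) = Γ((3 - p)/2, z) / z^((3 - p)/2) and with the function from the diff in scope. The p = 3 and p = 5 branches are skipped here because they call torch.expi, which, as noted above, does not exist.

import math

import mpmath
import torch

def reference(exponent, z):
    # Gamma((3 - p)/2, z) / z**((3 - p)/2), evaluated with mpmath
    a = (3 - exponent) / 2
    return float(mpmath.gammainc(a, z) / mpmath.mpf(z) ** a)

for p in (1, 2, 4, 6):  # p = 3 and p = 5 need the exponential integral
    for z in (0.1, 0.5, 1.0, 2.5):
        value = gammainc_upper_over_powerlaw(p, torch.tensor(z, dtype=torch.float64))
        assert math.isclose(value.item(), reference(p, z), rel_tol=1e-10), (p, z)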

@E-Rum (Contributor) commented Dec 26, 2024

The current status is as follows:

For arbitrary odd integer exponents (p >= 3), we need the exponential integral, which is not implemented in PyTorch. The current solution is to use the existing SplinedPotential to model 1/r^p-type potentials. However, before fully adopting it, I plan to use the scipy module, which includes the exponential integral, to verify that the values generated by SplinedPotential are consistent with the theoretical ones.
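For reference (an illustrative aside, not code from the PR): for p = 3 the kernel reduces to the upper incomplete gamma Γ(0, z), which is exactly the exponential integral that SciPy provides, so a SciPy-based check can rely on

from scipy.special import exp1, expi

z = 0.7
# Gamma(0, z) = E1(z) = -Ei(-z); scipy offers both exp1 and expi
assert abs(exp1(z) - (-expi(-z))) < 1e-12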

However, there is a specific issue when p = 3. In this case, certain variables become degenerate, leading to situations where we raise values to the power of zero and even encounter division by zero (e.g., in the background_correction function).

Can someone provide insights into this issue and suggest potential solutions?

…functions in the InversePowerLawPotential class
@E-Rum (Contributor) commented Jan 3, 2025

In the last commit, I made the following changes: I temporarily inserted a SciPy function that provides the exponential integral into the existing InversePowerLawPotential class. Even though this breaks PyTorch differentiability, it gives us a reference value for now.

As discussed with @ceriottm, one possible solution to overcome the absence of the exponential integral in PyTorch is to implement it using a spline potential. Therefore, I also drafted how this could be implemented in a separate integerspline.py module.

So far, the numerically splined implementation coincides with the reference SciPy one with good accuracy.

However, we still need to discuss what to do with p <= 3. From what I found in the literature, our current implementation is correct for p > 3. For p <= 3, however, one should be careful, as certain constraints (e.g., charge or dipole neutrality) need to be enforced (this is apparently why we are currently having trouble in the code with p = 3). Please correct me if I misinterpreted this while reviewing the literature.

If that’s the case, I think we should add some constraints in the code for the above-mentioned exponents.

@ceriottm (Contributor) commented Jan 5, 2025

> (quoting @E-Rum's comment from Jan 3, 2025)

The idea of the "background correction" is to add a uniform density that cancels out the total pseudo-charge, making the system overall neutral. I see no reason why this wouldn't be doable for p = 2, 3, so there might be an error in the expression for p = 3.

@E-Rum (Contributor) commented Jan 6, 2025

This is from Williams, D. E. (1989). Accelerated Convergence Treatment of R^-n Lattice Sums. Crystallography Reviews, 2(1), 3–23. https://doi.org/10.1080/08893118908032944

[Screenshot from Williams (1989): excerpt discussing the restrictions on R^-n lattice sums for small exponents]

Thus, there are indeed some restrictions for p<3, which might be causing the divergences. Alternatively, I might have misunderstood the issue, and we may have implemented the background_correction incorrectly.

@E-Rum (Contributor) commented Jan 6, 2025

Okay, implementing a custom backpropagation-compatible PyTorch exponential integral turned out to be not that hard (thanks @kvhuguenin for noticing it), so it's done. Now we need to resolve the background_correction problem and add tests. I will add tests soon to compare the values and derivatives of the InversePowerLawPotential with the splined implementation.
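For illustration, a backpropagation-compatible exponential integral can be built by wrapping a SciPy evaluation in a custom torch.autograd.Function with the analytic gradient d Ei(x)/dx = exp(x)/x. This is only a minimal sketch of the idea; the actual implementation in the PR may look quite different (e.g., a pure-PyTorch evaluation), and the names here are illustrative.

import torch
from scipy import special

class _Expi(torch.autograd.Function):
    # Exponential integral Ei(x) with an analytic backward pass (sketch only).

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        out = torch.from_numpy(special.expi(x.detach().cpu().numpy()))
        return out.to(dtype=x.dtype, device=x.device)

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # d/dx Ei(x) = exp(x) / x
        return grad_output * torch.exp(x) / x

def torch_expi(x: torch.Tensor) -> torch.Tensor:
    return _Expi.apply(x)

# quick sanity check of the analytic gradient against finite differences
x = torch.tensor([0.5, 1.0, 2.0], dtype=torch.float64, requires_grad=True)
torch.autograd.gradcheck(torch_expi, (x,))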

@E-Rum (Contributor) commented Jan 8, 2025

I added tests to compare theoretical values and their derivatives with the splined potential for exponents 4, 5, and 6. I also decided to delete the SplinedInteger potential, as it has become redundant now that we have theoretical, backpropagation-compatible integer potentials. The only thing left to check is the background correction and what's happening with p = 3, and then it will be ready to merge.
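As a rough illustration of such a derivative check (a hedged, pytest-style sketch using a plain finite-difference comparison rather than the spline-based reference the PR actually adds; it assumes gammainc_upper_over_powerlaw from the diff is in scope and only exercises the branches that avoid the exponential integral):

import torch

def _fd_derivative(p, z, eps=1e-6):
    # central finite difference of the kernel with respect to z
    hi = gammainc_upper_over_powerlaw(p, torch.tensor(z + eps, dtype=torch.float64))
    lo = gammainc_upper_over_powerlaw(p, torch.tensor(z - eps, dtype=torch.float64))
    return (hi - lo).item() / (2 * eps)

def test_gammainc_upper_over_powerlaw_derivatives():
    for p in (1, 2, 4, 6):
        z = torch.tensor(0.8, dtype=torch.float64, requires_grad=True)
        (grad,) = torch.autograd.grad(gammainc_upper_over_powerlaw(p, z), z)
        assert abs(grad.item() - _fd_derivative(p, 0.8)) < 1e-6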

@PicoCentauri (Contributor) left a comment:

Awesome!

src/torchpme/potentials/inversepowerlaw.py (two outdated review threads, resolved)
E-Rum and others added 7 commits January 9, 2025 15:08
…Compatibility with Potentials (#143)

* Refactor parameter handling in calculators and potentials for improved dtype and device management
* Updated docstrings and changelog, added an assertion to check for an instance of the potential, and resolved the TorchScript Potential/Calculator incompatibility.
* Update changelog and add test for potential and calculator compatibility
@PicoCentauri (Contributor) left a comment:

Very nice. I have just a suggestion to keep the repo a bit clean.

Thanks @E-Rum !

src/torchpme/potentials/inversepowerlaw.py (outdated review thread, resolved)
@E-Rum E-Rum merged commit 3540a81 into main Jan 17, 2025
12 checks passed
@E-Rum E-Rum deleted the exponents branch January 17, 2025 15:33