
Releases: deepmodeling/deepmd-kit

v3.0.0

23 Nov 08:10 · e695a91

DeePMD-kit v3: Multiple-backend Framework, DPA-2 Large Atomic Model, and Plugin Mechanisms

After eight months of public testing, we are excited to present the first stable version of DeePMD-kit v3, an advanced version that enables deep potential models with TensorFlow, PyTorch, or JAX backends. Additionally, DeePMD-kit v3 introduces support for the DPA-2 model, a novel architecture optimized for large atomic models. This release also enhances the plugin mechanisms, making it easier to integrate and develop new models.

Highlights

Multiple-backend framework: TensorFlow, PyTorch, and JAX support


DeePMD-kit v3 adds a versatile, pluggable framework providing a consistent training and inference experience across multiple backends. Version 3.0.0 includes:

  • TensorFlow backend: Known for its computational efficiency with a static graph design.
  • PyTorch backend: A dynamic graph backend that simplifies model extension and development.
  • DP backend: Built with NumPy and Array API, a reference backend for development without heavy deep-learning frameworks.
  • JAX backend: A static-graph backend built on top of the DP backend via the Array API.
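Because the DP backend is written against the Array API rather than NumPy alone, the same kernel can also run on JAX arrays. The snippet below is a rough sketch of that idea and is not DeePMD-kit code; the pairwise_distances helper and the use of the array_api_compat package are illustrative assumptions.

# One Array-API-agnostic kernel that runs on NumPy arrays (as in the DP
# backend) or JAX arrays (as in the JAX backend), dispatching on the
# namespace of its input.
import array_api_compat
import numpy as np

def pairwise_distances(coords):
    xp = array_api_compat.array_namespace(coords)   # numpy, jax.numpy, ...
    diff = coords[:, None, :] - coords[None, :, :]  # (natoms, natoms, 3)
    return xp.sqrt(xp.sum(diff * diff, axis=-1))

print(pairwise_distances(np.random.rand(4, 3)))  # NumPy in, NumPy out
# pairwise_distances(jax.numpy.asarray(...)) would return a JAX array.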
[Table: per-backend support matrix (TensorFlow / PyTorch / JAX / DP) covering the descriptors local frame, se_e2_a, se_e2_r, se_e3, se_e3_tebd, DPA1, DPA2, and Hybrid; the energy, dipole, polar, DOS, and property fittings; the ZBL, DPLR, DPRc, and Spin models; and gradient calculation, model training, model compression, Python inference, and C++ inference.]

Critical features of the multiple-backend framework include the ability to:

  • Train models using different backends with the same training data and input script, allowing backend switching based on your efficiency or convenience needs.
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze
dp --tf compress

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
dp --pt compress
  • Convert models between backends using dp convert-backend, with backend-specific file extensions (e.g., .pb for TensorFlow and .pth for PyTorch).
# Convert from a TensorFlow model to a PyTorch model
dp convert-backend frozen_model.pb frozen_model.pth
# Convert from a PyTorch model to a TensorFlow model
dp convert-backend frozen_model.pth frozen_model.pb
# Convert from a PyTorch model to a JAX model
dp convert-backend frozen_model.pth frozen_model.savedmodel
# Convert from a PyTorch model to the backend-independent DP format
dp convert-backend frozen_model.pth frozen_model.dp
  • Run inference across backends via interfaces such as dp test, the Python/C++/C interfaces, or third-party packages (dpdata, ASE, LAMMPS, AMBER, GROMACS, i-PI, CP2K, OpenMM, ABACUS, etc.); a Python example follows this list.
# In a LAMMPS file:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# run LAMMPS with a JAX backend model
pair_style deepmd frozen_model.savedmodel
# Calculate model deviation using different models
pair_style deepmd frozen_model.pb frozen_model.pth frozen_model.savedmodel out_file md.out out_freq 100
  • Add a new backend to DeePMD-kit with much less effort, should you wish to contribute one.
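As a concrete Python example of the cross-backend inference above, the snippet below evaluates the same toy two-atom configuration with frozen models from two backends through the DeepPot class in deepmd.infer; the coordinates and cell are made up for illustration.

# Cross-backend Python inference: the backend is selected from the model
# file extension, while the calling code stays identical.
import numpy as np
from deepmd.infer import DeepPot

coord = np.array([[0.0, 0.0, 0.0, 0.0, 0.0, 1.5]])  # (nframes, natoms * 3)
cell = (10.0 * np.eye(3)).reshape(1, 9)             # (nframes, 9)
atype = [0, 1]                                      # per-atom type indices

for model in ("frozen_model.pb", "frozen_model.pth"):
    dp = DeepPot(model)
    energy, force, virial = dp.eval(coord, cell, atype)
    print(model, "->", energy)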

DPA-2 model: a large atomic model as a multi-task learner

The DPA-2 model offers a robust architecture for large atomic models (LAM), accurately representing diverse chemical systems for high-quality simulations. In this release, DPA-2 can be trained using the PyTorch backend, supporting both single-task (see examples/water/dpa2) and multi-task (see examples/water_multi_task/pytorch_example) training schemes. DPA-2 is also available for Python/C++ inference in the JAX backend.

The DPA-2 descriptor comprises repinit and repformer, as shown below.

[Figure: DPA-2 descriptor architecture, comprising repinit and repformer]

The PyTorch backend supports training strategies for large atomic models, including:

  • Parallel training: Train large atomic models on multiple GPUs for efficiency.
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
  • Multi-task training: For large atomic models trained on a broad range of data computed at different DFT levels, with shared descriptors. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.
  • Fine-tuning: Train a pre-trained large atomic model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.

Plugin mechanisms for external models

In version 3.0.0, plugin capabilities have been implemented to support developing and integrating potential energy models with the TensorFlow, PyTorch, or JAX backend, leveraging DeePMD-kit's trainer, loss functions, and interfaces. An example is the deepmd-gnn plugin, which supports training the MACE and NequIP models within DeePMD-kit using the familiar commands:

dp --pt train mace.json
dp --pt freeze
dp --pt test -m frozen_model.pth -s ../data/
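For orientation, the fragment below sketches the registration pattern such a plugin can follow; the keyword my_model, the class name, and the import path are assumptions made for illustration. See deepmd-gnn for a real, complete plugin, which is exposed to DeePMD-kit through a Python entry point so that dp --pt can discover it.

# Hypothetical plugin sketch; names and import path are illustrative only.
from deepmd.pt.model.model import BaseModel  # assumed import path

@BaseModel.register("my_model")  # lets input.json refer to "type": "my_model"
class MyModel(BaseModel):
    # Implement the forward pass, serialization, etc., reusing
    # DeePMD-kit's trainer, loss functions, and interfaces.
    ...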


Other new features

  • Descriptor se_e3_tebd (#4066).
  • Property fitting (#3867).
  • New training parameters: max_ckpt_keep (#3441), change_bias_after_training (#3993), and stat_file.
  • New command-line interfaces: dp change-bias (#3993) and dp show (#3796).
  • Support for generating a JSON schema for integration with VSCode (#3849).
  • The latest LAMMPS version (stable_29Aug2024_update1) is supported (#4088, #4179).

Breaking changes

  • The deepmodeling conda channel is deprecated. Use the conda-forge channel instead. (#3462, #4385)
  • The offline package and conda packages for CUDA 11 are dropped.
  • Python 3.7 and 3.8 support is dropped. (#3185, #4185)
  • The minimum supported versions of the deep learning frameworks are TensorFlow 2.7, PyTorch 2.1, JAX 0.4.33, and NumPy 1.21.
  • All model files must have the correct filename extension for all interfaces so that the corresponding backend can load them; TensorFlow model files must end with the .pb extension.
  • Bias is removed by default from type embedding. (#3958)
  • The spin model is refactored, and its usage in the LAMMPS module has been changed. (#3301, #4321)
  • Multi-task training support is removed from the TensorFlow backend. (#3763)
  • The set_prefix key is deprecated. (#3753)
  • All sets are now used for training and testing; in previous versions, only the last set was used as the test set in dp test. (#3862)
  • The Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. (#3177, #3178)
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns atomic tensors in the dimension of natoms instead of nsel_atoms. (#3390)
  • C++ 11 support is dropped. (#4068)

For other changes, refer to Full Changelog: v2.2.11...v3.0.0rc0

Contributors

The PyTorch backend was developed in the dptech-corp/deepmd-pytorch repository and was fully merged into the deepmd-kit repository in #3180.


v3.0.0rc0

14 Nov 19:36 · 0ad4289
Pre-release

DeePMD-kit v3: Multiple-backend Framework, DPA-2 Large Atomic Model, and Plugin Mechanisms

We are excited to present the first release candidate of DeePMD-kit v3, an advanced version that enables deep potential models with TensorFlow, PyTorch, or JAX backends. Additionally, DeePMD-kit v3 introduces support for the DPA-2 model, a novel architecture optimized for large atomic models. This release also enhances the plugin mechanisms, making it easier to integrate and develop new models.

Highlights

Multiple-backend framework: TensorFlow, PyTorch, and JAX support


DeePMD-kit v3 adds a versatile, pluggable framework providing a consistent training and inference experience across multiple backends. Version 3.0.0 includes:

  • TensorFlow backend: Known for its computational efficiency with a static graph design.
  • PyTorch backend: A dynamic graph backend that simplifies model extension and development.
  • DP backend: Built with NumPy and Array API, a reference backend for development without heavy deep-learning frameworks.
  • JAX backend: A static-graph backend built on top of the DP backend via the Array API.
[Table: per-backend support matrix (TensorFlow / PyTorch / JAX / DP) covering the descriptors local frame, se_e2_a, se_e2_r, se_e3, se_e3_tebd, DPA1, DPA2, and Hybrid; the energy, dipole, polar, DOS, and property fittings; the ZBL, DPLR, DPRc, and Spin models; and gradient calculation, model training, model compression, Python inference, and C++ inference.]

Critical features of the multiple-backend framework include the ability to:

  • Train models using different backends with the same training data and input script, allowing backend switching based on your efficiency or convenience needs.
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze
dp --tf compress

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
dp --pt compress
  • Convert models between backends using dp convert-backend, with backend-specific file extensions (e.g., .pb for TensorFlow and .pth for PyTorch).
# Convert from a TensorFlow model to a PyTorch model
dp convert-backend frozen_model.pb frozen_model.pth
# Convert from a PyTorch model to a TensorFlow model
dp convert-backend frozen_model.pth frozen_model.pb
# Convert from a PyTorch model to a JAX model
dp convert-backend frozen_model.pth frozen_model.savedmodel
# Convert from a PyTorch model to the backend-independent DP format
dp convert-backend frozen_model.pth frozen_model.dp
  • Run inference across backends via interfaces such as dp test, the Python/C++/C interfaces, or third-party packages (dpdata, ASE, LAMMPS, AMBER, GROMACS, i-PI, CP2K, OpenMM, ABACUS, etc.).
# In a LAMMPS file:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# run LAMMPS with a JAX backend model
pair_style deepmd frozen_model.savedmodel
# Calculate model deviation using different models
pair_style deepmd frozen_model.pb frozen_model.pth frozen_model.savedmodel out_file md.out out_freq 100
  • Add a new backend to DeePMD-kit with much less effort, should you wish to contribute one.

DPA-2 model: Towards a universal large atomic model for molecular and material simulation

The DPA-2 model offers a robust architecture for large atomic models (LAM), accurately representing diverse chemical systems for high-quality simulations. In this release, DPA-2 is trainable in the PyTorch backend, with an example configuration available in examples/water/dpa2. DPA-2 is available for Python inference in the JAX backend.

The DPA-2 descriptor comprises repinit and repformer, as shown below.

[Figure: DPA-2 descriptor architecture, comprising repinit and repformer]

The PyTorch backend supports training strategies for large atomic models, including:

  • Parallel training: Train large atomic models on multiple GPUs for efficiency.
torchrun --nproc_per_node=4 --no-python dp --pt train input.json
  • Multi-task training: For large atomic models trained on a broad range of data computed at different DFT levels, with shared descriptors. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.
  • Fine-tuning: Train a pre-trained large atomic model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.

Plugin mechanisms for external models

In v3.0.0, plugin capabilities allow you to develop models with TensorFlow, PyTorch, or JAX, leveraging DeePMD-kit's trainer, loss functions, and interfaces. An example is the deepmd-gnn plugin, which supports training the MACE and NequIP models within DeePMD-kit using the familiar commands:

dp --pt train mace.json
dp --pt freeze
dp --pt test -m frozen_model.pth -s ../data/


Other new features

  • Descriptor se_e3_tebd (#4066).
  • Property fitting (#3867).
  • New training parameters: max_ckpt_keep (#3441), change_bias_after_training (#3993), and stat_file.
  • New command-line interfaces: dp change-bias (#3993) and dp show (#3796).
  • Support for generating a JSON schema for integration with VSCode (#3849).
  • The latest LAMMPS version (stable_29Aug2024_update1) is supported (#4088, #4179).

Breaking changes

  • Python 3.7 and 3.8 support is dropped. (#3185, #4185)
  • All model files must have the correct filename extension for all interfaces so that the corresponding backend can load them; TensorFlow model files must end with the .pb extension.
  • Bias is removed by default from type embedding. (#3958)
  • The spin model is refactored, and its usage in the LAMMPS module has been changed. (#3301, #4321)
  • Multi-task training support is removed from the TensorFlow backend. (#3763)
  • The set_prefix key is deprecated. (#3753)
  • All sets are now used for training and testing; in previous versions, only the last set was used as the test set in dp test. (#3862)
  • The Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. (#3177, #3178)
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns atomic tensors in the dimension of natoms instead of nsel_atoms. (#3390)
  • C++ 11 support is dropped. (#4068)

For other changes, refer to Full Changelog: v2.2.11...v3.0.0rc0

Contributors

The PyTorch backend was developed in the dptech-corp/deepmd-pytorch repository and was fully merged into the deepmd-kit repository in #3180.


v3.0.0b4

25 Sep 16:01 · 0b3f860
Pre-release

What's Changed

Breaking changes

  • breaking: drop C++ 11 by @njzjz in #4068
  • breaking(pt/dp): tune new sub-structures for DPA2 by @iProzd in #4089
    The new options g1_out_conv and g1_out_mlp default to True; in previous versions they behaved as False.

Enhancement

  • fix: bump LAMMPS to stable_29Aug2024 by @njzjz in #4088
  • chore(pt): cleanup deadcode by @wanghan-iapcm in #4142
  • chore(pt): make comm_dict for dpa2 noncompulsory when nghost is 0 by @njzjz in #4144
  • Set ROCM_ROOT to ROCM_PATH when it exist by @sigbjobo in #4150
  • chore(pt): move deepmd.pt.infer.deep_eval.eval_model to tests by @njzjz in #4153

Documentation

  • docs: improve docs for environment variables by @njzjz in #4070
  • docs: dynamically generate command outputs by @njzjz in #4071
  • docs: improve error message for inconsistent type maps by @njzjz in #4074
  • docs: add multiple packages to intersphinx_mapping by @njzjz in #4075
  • docs: document CMake variables using Sphinx styles by @njzjz in #4079
  • docs: update ipi installation command by @njzjz in #4081
  • docs: fix the default value of DP_ENABLE_PYTORCH by @njzjz in #4083
  • docs: fix defination of se_e3 by @njzjz in #4113
  • docs: update DeepModeling URLs by @njzjz-bot in #4119
  • docs(pt): examples for new dpa2 model by @iProzd in #4138

Bugfix

  • fix: fix PT AutoBatchSize OOM bug and merge execute_all into base by @njzjz in #4047
  • fix: replace datetime.datetime.utcnow which is deprecated by @njzjz in #4067
  • fix:fix LAMMPS MPI tests with mpi4py 4.0.0 by @njzjz in #4032
  • fix(pt): invalid type_map when multitask training by @Cloudac7 in #4031
  • fix: manage testing models in a standard way by @njzjz in #4028
  • fix(pt): fix ValueError when array byte order is not native by @njzjz in #4100
  • fix(pt): convert torch.__version__ to str when serializing by @njzjz in #4106
  • fix(tests): fix skip_dp by @njzjz in #4111
  • [Fix] Wrap log_path with Path by @HydrogenSulfate in #4117
  • fix: bugs in uts for property fit by @Chengqian-Zhang in #4120
  • fix: type of the preset out bias by @wanghan-iapcm in #4135
  • fix(pt): fix zero inputs for LayerNorm by @njzjz in #4134
  • fix(pt/dp): share params of repinit_three_body by @iProzd in #4139
  • fix(pt): move entry point from deepmd.pt.model to deepmd.pt by @njzjz in #4146
  • fix: fix DPH5Path.glob for new keys by @njzjz in #4152
  • fix(pt): make state_dict safe for weights_only by @iProzd in #4148
  • fix(pt): fix compute_output_stats_global when atomic_output is None by @njzjz in #4155
  • fix(pt ut): make separated uts deterministic by @iProzd in #4162
  • fix(pt): finetuning property/dipole/polar/dos fitting with multi-dimensional data causes error by @Chengqian-Zhang in #4145

Dependency updates

  • chore(deps): bump scikit-build-core to 0.9.x by @njzjz in #4038
  • build(deps): bump pypa/cibuildwheel from 2.19 to 2.20 by @dependabot in #4045
  • build(deps): bump pypa/cibuildwheel from 2.20 to 2.21 by @dependabot in #4127

CI/CD

  • ci: add include-hidden-files to actions/upload-artifact by @njzjz in #4095
  • ci: test Python 3.12 by @njzjz in #4059
  • CI(codecov): do not notify until all reports are ready by @njzjz in #4136

Full Changelog: v3.0.0b3...v3.0.0b4

v3.0.0b3

27 Jul 04:25 · 0e0fc1a
Pre-release

What's Changed

Full Changelog: v3.0.0b2...v3.0.0b3

v3.0.0b2

26 Jul 18:33 · 7f61048
Pre-release

What's Changed

New features

  • feat: add documentation and options for multi-task arguments by @njzjz in #3989
  • feat: plain text model format by @njzjz in #4025
  • feat: allow model arguments to be registered outside by @njzjz in #3995
  • feat: add get_model classmethod to BaseModel by @njzjz in #4002

Bugfixes

  • fix(cmake): fix set_if_higher by @njzjz in #3977
  • fix(pt): ensure suffix of --init_model and --restart is .pt by @njzjz in #3980
  • fix(pt): do not overwrite disp_file when restarting training by @njzjz in #3985
  • fix(cc): compile select_map<int> when TensorFlow backend is off by @njzjz in #3987
  • fix(pt): make 'find_' to be float in get data by @iProzd in #3992
  • fix float precision problem of se_atten in line 217 (#3961) by @LiuGroupHNU in #3978
  • fix: fix errors for zero atom inputs by @njzjz in #4005
  • fix(pt): optimize graph memory usage by @iProzd in #4006
  • fix(pt): fix lammps nlist sort with large sel by @iProzd in #3993
  • fix(cc): add atomic argument to DeepPotBase::computew by @njzjz in #3996
  • fix(lmp): call model deviation interface without atomic properties when they are not requested by @njzjz in #4012
  • fix(c): call C++ interface without atomic properties when they are not requested by @njzjz in #4010
  • fix(pt): fix get_dim for DescrptDPA1Compat by @iProzd in #4007
  • fix(cc): fix message passing when nloc is 0 by @njzjz in #4021
  • fix(pt): use user seed in DpLoaderSet by @iProzd in #4015

CI/CD

  • ci: pin PT to 2.3.1 when using CUDA by @njzjz in #4009

Full Changelog: v3.0.0b1...v3.0.0b2

v3.0.0b1

14 Jul 07:11 · ad96750
Pre-release

What's Changed

Breaking Changes

  • breaking(pt/tf/dp): disable bias in type embedding by @iProzd in #3958
    This change may make PyTorch checkpoints generated by v3.0.0b0 unusable in v3.0.0b1.

New features

  • feat: add plugin entry point for PT by @njzjz in #3965
  • feat(tf): improve the activation setting in tebd by @iProzd in #3971

Full Changelog: v3.0.0b0...v3.0.0b1

v3.0.0b0

03 Jul 19:22 · 29db791
Pre-release

What's Changed

Compared to v3.0.0a0, v3.0.0b0 contains all changes in v2.2.10 and v2.2.11, as well as:

Breaking changes

  • breaking: remove multi-task support in tf by @iProzd in #3763
  • breaking: deprecate set_prefix by @njzjz in #3753
  • breaking: use all sets for training and test by @njzjz in #3862. In previous versions, only the last set was used as the test set in dp test.
  • PyTorch models trained in v3.0.0a0 cannot be used in v3.0.0b0 due to several changes. As mentioned in the release note of v3.0.0a0, we didn't promise backward compatibility for v3.0.0a0.
  • The DPA-2 configurations have been changed by @iProzd in #3768. The old format in v3.0.0a0 is no longer supported.

Major new features

  • The latest features supported in the PyTorch and DP backends, consistent with the TensorFlow backend where possible:
    • Descriptor: se_e2_a, se_e2_r, se_e3, se_atten, se_atten_v2, dpa2, hybrid;
    • Fitting: energy, dipole, polar, dos, fparam/aparam support
    • Model: standard, DPRc, frozen, ZBL, Spin
    • Python inference interface
    • PyTorch only: C++ inference interface for energy only
    • PyTorch only: TensorBoard
  • Support using the DPA-2 model in LAMMPS by @CaRoLZhangxy in #3657. If you install the Python interface from source, you must set the environment variable DP_ENABLE_PYTORCH=1 to build the PyTorch customized OPs.
  • New command line options dp show by @Chengqian-Zhang in #3796 and dp change-bias by @iProzd in #3933.
  • New training options max_ckpt_keep by @iProzd in #3441 and change_bias_after_training by @iProzd in #3933. Several training options now take effect in the PyTorch backend, such as seed by @iProzd in #3773, disp_training and time_training by @iProzd in #3775, and profiling by @njzjz in #3897.
  • Performance improvement of the PyTorch backend by @njzjz in #3422, #3424, #3425 and by @iProzd in #3826
  • Support generating JSON schema for integration with VSCode by @njzjz in #3849

Minor enhancements and code refactoring are listed at v3.0.0a0...v3.0.0b0.

Full Changelog: v3.0.0a0...v3.0.0b0

For discussion of v3, please go to #3401

v2.2.11

03 Jul 19:22 · 84ca63c

What's Changed

New feature

  • feat: apply descriptor exclude_types to env mat stat by @njzjz in #3625
  • feat(build): Add Git archives version files by @njzjz-bot in #3669

Enhancement

  • style: enable W rules by @njzjz in #3793
  • build: unpin tensorflow version on windows by @njzjz in #3721
  • Add a reminder for the illegal memory error by @Yi-FanLi in #3822
  • lmp: improve error message when compute/fix is not found by @njzjz in #3801

Bugfix

  • tf: remove freeze warning for optional nodes by @njzjz in #3381
  • fix: set rpath for protobuf by @njzjz in #3636
  • fix(tf): apply exclude types to se_atten_v2 switch by @njzjz in #3651
  • fix: fix git version detection in docker_package_c.sh by @njzjz in #3658
  • fix(tf): fix float32 for exclude_types in se_atten_v2 by @njzjz in #3682
  • Fix typo in smooth_type_embdding by @iProzd in #3698
  • test: set more lossy precision requirements by @nahso in #3726
  • fix: fix ipi package by @njzjz in #3835
  • fix(tf): prevent fitting_attr variable scope from becoming fitting_attr_1 by @njzjz in #3930
  • fix seeds in se_a and se_atten by @njzjz in #3880

CI/CD

  • CI: Accerate GitHub Actions using uv by @njzjz in #3676
  • ci: bump ase to 3.23.0 by @njzjz in #3846
  • ci(build): use uv for cibuildwheel by @njzjz in #3695
  • chore(ci): workaround to retry error decoding response body from uv by @njzjz in #3889

Dependency updates

  • build(deps): bump tar from 6.1.14 to 6.2.1 in /source/nodejs by @dependabot in #3714
  • build(deps): bump pypa/cibuildwheel from 2.17 to 2.18 by @dependabot in #3777
  • build(deps): bump docker/build-push-action from 5 to 6 by @dependabot in #3882

Full Changelog: v2.2.10...v2.2.11

v2.2.10

06 Apr 19:28

What's Changed

Enhancement

  • Neighbor stat is 80x accelerated by @njzjz in #3275
  • support checkpoint path (instead of directory) in dp freeze by @njzjz in #3254
  • add fparam/aparam support for finetune by @njzjz in #3313
  • chore(build): move static part of dynamic metadata to pyproject.toml by @njzjz in #3618
  • test: add LAMMPS MPI tests by @njzjz in #3572
  • support Python 3.12 by @njzjz in #3343

Documentation

  • docs: rewrite README; deprecate manually written TOC by @njzjz in #3179
  • docs: apply type_one_side=True to se_a and se_r by @njzjz in #3364
  • docs: add deprecation notice for the official conda channel and more conda docs by @njzjz in #3462
  • docs: Replace quick_start.ipynb with a new version. by @Mancn-Xu in #3567
  • issue template: change TF version to backend version by @njzjz in #3244
  • chore: remove incorrect memset TODOs by @njzjz in #3600

Bugfix

  • c: change the required shape of electric field to nloc * 3 by @njzjz in #3237
  • Fix LAMMPS plugin symlink path on macOS platform by @chazeon in #3473
  • fix_dplr.cpp delete redundant setup by @shiruosong in #3344
  • fix_dplr.cpp set atom->image when pre_force by @shiruosong in #3345
  • fix: fix type hint of sel by @njzjz in #3624
  • fix: make se_atten_v2 masking smooth when davg is not zero by @njzjz in #3632
  • fix: do not install tf-keras for cu11 by @njzjz in #3444

Dependency update

  • bump LAMMPS to stable_2Aug2023_update3 by @njzjz in #3399
  • build(deps): bump codecov/codecov-action from 3 to 4 by @dependabot in #3231
  • build(deps): bump pypa/cibuildwheel from 2.16 to 2.17 by @dependabot in #3487
  • pin nvidia-cudnn-cu{11,12} to <9 by @njzjz in #3610
  • pin docker actions to major versions by @njzjz in #3238
  • build(deps): bump the npm_and_yarn group across 1 directories with 1 update by @dependabot in #3312
  • bump scikit-build-core to 0.8 by @njzjz in #3369
  • build(deps): bump softprops/action-gh-release from 1 to 2 by @dependabot in #3446

Full Changelog: v2.2.9...v2.2.10

v3.0.0a0

03 Mar 09:22 · ec32340
Pre-release

DeePMD-kit v3: A multiple-backend framework for deep potentials

We are excited to announce the first alpha version of DeePMD-kit v3. DeePMD-kit v3 allows you to train and run deep potential models on top of TensorFlow or PyTorch. DeePMD-kit v3 also supports the DPA-2 model, a novel architecture for large atomic models.

Highlights

Multiple-backend framework


DeePMD-kit v3 adds a pluggable multiple-backend framework to provide a consistent training and inference experience across different backends. You can:

  • Use the same training data and the input script to train a deep potential model with different backends. Switch backends based on efficiency, functionality, or convenience:
# Training a model using the TensorFlow backend
dp --tf train input.json
dp --tf freeze

# Training a model using the PyTorch backend
dp --pt train input.json
dp --pt freeze
  • Use any model to perform inference via any existing interface, including dp test, the Python/C++/C interfaces, and third-party packages (dpdata, ASE, LAMMPS, AMBER, GROMACS, i-PI, CP2K, OpenMM, ABACUS, etc.). For example, in LAMMPS:
# run LAMMPS with a TensorFlow backend model
pair_style deepmd frozen_model.pb
# run LAMMPS with a PyTorch backend model
pair_style deepmd frozen_model.pth
# Calculate model deviation using both models
pair_style deepmd frozen_model.pb frozen_model.pth out_file md.out out_freq 100
  • Convert models between backends, using dp convert-backend, if both backends support a model:
dp convert-backend frozen_model.pb frozen_model.pth
dp convert-backend frozen_model.pth frozen_model.pb
  • Add a new backend to DeePMD-kit with much less effort, should you wish to contribute one.

PyTorch backend: a backend designed for large atomic models and new research

We added the PyTorch backend in DeePMD-kit v3 to support the development of new models, especially for large atomic models.

DPA-2 model: Towards a universal large atomic model for molecular and material simulation

The DPA-2 model is a novel architecture for large atomic models (LAM) that can accurately represent a diverse range of chemical systems and materials, enabling high-quality simulations and predictions with significantly reduced effort compared to traditional methods. The DPA-2 model is implemented only in the PyTorch backend. An example configuration is in the examples/water/dpa2 directory.

The DPA-2 descriptor includes two primary components: repinit and repformer. The detailed architecture is shown in the following figure.

[Figure: DPA-2 descriptor architecture, comprising repinit and repformer]

Training strategies for large atomic models

The PyTorch backend supports multiple training strategies for developing large atomic models.

Parallel training: Large atomic models have many hyper-parameters and a complex architecture, so training on multiple GPUs is necessary. Benefiting from the PyTorch community ecosystem, parallel training in the PyTorch backend is driven by torchrun, a launcher for distributed data parallel training.

torchrun --nproc_per_node=4 --no-python dp --pt train input.json

Multi-task training: Large atomic models are trained against data in a wide scope and at different DFT levels, which requires multi-task training. The PyTorch backend supports multi-task training, sharing the descriptor between different tasks. An example is given in examples/water_multi_task/pytorch_example/input_torch.json.

Fine-tune: Fine-tuning is useful for training a pre-trained large model on a smaller, task-specific dataset. The PyTorch backend supports the --finetune argument in the dp --pt train command line.

Developing new models using Python and dynamic graphs

The static graph and custom C++ OPs of the TensorFlow backend can be painful for researchers, sacrificing development convenience for computational performance. The PyTorch backend has a well-designed code structure built on the dynamic graph and is currently written entirely in Python, making it easier to extend and debug new deep potential models than with a static graph.
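The toy module below is a generic PyTorch fragment, not a DeePMD-kit model, illustrating the point: the forward pass is ordinary Python, so intermediate tensors can be inspected with a plain print or a debugger, and gradients come from autograd rather than a precompiled graph.

# Toy example of dynamic-graph development; not a DeePMD-kit model.
import torch

class ToyAtomicEnergy(torch.nn.Module):
    def __init__(self, n_descriptor: int = 8):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(n_descriptor, 16),
            torch.nn.Tanh(),
            torch.nn.Linear(16, 1),
        )

    def forward(self, descriptors: torch.Tensor) -> torch.Tensor:
        per_atom = self.net(descriptors)                 # (natoms, 1)
        print("per-atom energies:", per_atom.flatten())  # eager debugging
        return per_atom.sum()                            # total energy

descriptors = torch.randn(4, 8, requires_grad=True)
energy = ToyAtomicEnergy()(descriptors)
energy.backward()  # autograd provides d(energy)/d(descriptors) for free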

Supporting traditional deep potential models

You may still want to use the traditional models that the TensorFlow backend already supports in the PyTorch backend, or compare the same model between backends. We have rewritten almost all of the traditional models in the PyTorch backend; they are listed below:

  • Features supported:
    • Descriptor: se_e2_a, se_e2_r, se_atten, hybrid;
    • Fitting: energy, dipole, polar, fparam/aparam support
    • Model: standard, DPRc
    • Python inference interface
    • C++ inference interface for energy only
    • TensorBoard
  • Features not supported yet:
    • Descriptor: se_e3, se_atten_v2, se_e2_a_mask
    • Fitting: dos
    • Model: linear_ener, DPLR, pairtab, frozen, pairwise_dprc, ZBL, Spin
    • Model compression
    • Python inference interface for DPLR
    • C++ inference interface for tensors and DPLR
    • Parallel training using Horovod
  • Features not planned:
    • Descriptor: loc_frame, se_e2_a + type embedding, se_a_ebd_v2
    • NVNMD

Warning

As part of an alpha release, the PyTorch backend's API or user input arguments may change before the first stable version.

DP backend and format: reference backend for other backends

DP is a reference backend for development that uses pure NumPy to implement models, without any heavy deep-learning framework. It cannot be used for training, only for Python inference. As a reference backend, it aims at correct results rather than the best performance. The DP backend uses HDF5 to store model serialization data, which is backend-independent. The DP backend and its serialization data are used in the unit tests to ensure that different backends produce consistent results and can be converted to one another. In the current version, the DP backend has a support status similar to the PyTorch backend, although DPA-1 and DPA-2 are not supported yet.
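To give a flavor of what such a reference implementation looks like, below is a framework-free sketch of a smooth switching function of the kind used by the smooth descriptors; the polynomial follows the form documented for DeePMD-kit, but the snippet is an illustration, not the backend's actual code.

# Sketch of a pure-NumPy reference kernel: a smooth switch that decays
# from 1 at r <= rmin to 0 at r >= rmax.
import numpy as np

def switch(r, rmin, rmax):
    u = np.clip((r - rmin) / (rmax - rmin), 0.0, 1.0)
    return u**3 * (-6.0 * u**2 + 15.0 * u - 10.0) + 1.0

print(switch(np.linspace(0.0, 7.0, 8), rmin=0.5, rmax=6.0))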

Breaking changes

  • Python 3.7 support is dropped. by @njzjz in #3185
  • We require all model files to have the correct filename extension for all interfaces so that the corresponding backend can load them; TensorFlow model files must end with the .pb extension.
  • The Python class DeepTensor (including DeepDipole and DeepPolar) now returns atomic tensors in the dimension of natoms instead of nsel_atoms. by @njzjz in #3390
  • For developers: the Python module structure is fully refactored. The old deepmd module was moved to deepmd.tf without other API changes, and deepmd_utils was moved to deepmd without other API changes. by @njzjz in #3177, #3178

Other changes

Enhancement

  • Neighbor stat for the TensorFlow backend is 80x accelerated. by @njzjz in #3275
  • i-PI: remove normalize_coord by @njzjz in #3257
  • LAMMPS: fix_dplr.cpp delete redundant setup and set atom->image when pre_force by @shiruosong in #3344, #3345
  • Bump scikit-build-core to 0.8 by @njzjz in #3369
  • Bump LAMMPS to stable_2Aug2023_update3 by @njzjz in #3399
  • Add fparam/aparam support for fine-tune by @njzjz in #3313
  • TF: remove freeze warning for optional nodes by @njzjz in #3381

CI/CD

Bugfix

  • Fix TF 2.16 compatibility by @njzjz in #3343
  • Detect version in advance before building deepmd-kit-cu11 by @njzjz in #3172
  • C API: change the required shape of electric field to nloc * 3 by @njzjz in #3237

Full Changelog: https://github.com/deepmodeling/de...
