Merge branch 'main' of github.com:Quantum-Accelerators/quacc
Andrew-S-Rosen committed Dec 10, 2023
2 parents 39cdfdb + 41c0575 commit 7561064
Showing 4 changed files with 12 additions and 18 deletions.
4 changes: 2 additions & 2 deletions docs/install/codes.md
@@ -71,9 +71,9 @@ If you plan to use Q-Chem with Quacc, you will need to install `openbabel`. This
conda install -c conda-forge openbabel
```
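
As a quick sanity check that the bindings installed by the conda-forge package are importable, here is a minimal sketch; it assumes the Open Babel 3.x Python API and is illustrative only, not part of the diff.

```python
# Editorial sketch (not part of the diff): confirm the Open Babel Python
# bindings are importable (assumes Open Babel 3.x from conda-forge).
from openbabel import pybel

water = pybel.readstring("smi", "O")  # build a water molecule from a SMILES string
print(water.formula)  # expected to report a water formula (e.g., "H2O")
```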

## tblite
## TBLite

If you plan to use tblite with quacc, you will need to install the tblite interface with ASE support.
If you plan to use TBLite with quacc, you will need to install the tblite interface with ASE support.

```bash
pip install quacc[tblite] # only on Linux
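
For a sense of what the ASE support provides, here is a minimal sketch that drives the tblite calculator directly through ASE; the `TBLite` class and `GFN2-xTB` method name come from the tblite ASE interface, and the snippet is illustrative rather than a quacc recipe.

```python
# Editorial sketch (not part of the diff): using the tblite ASE calculator directly.
from ase.build import molecule
from tblite.ase import TBLite  # provided by tblite[ase]

atoms = molecule("H2O")
atoms.calc = TBLite(method="GFN2-xTB")  # semiempirical GFN2-xTB tight-binding method
print(atoms.get_potential_energy())  # total energy in eV through the ASE interface
```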
2 changes: 1 addition & 1 deletion docs/user/basics/wflow_overview.md
@@ -55,7 +55,7 @@ Everyone's computing needs are different, so we ensured that quacc is interopera
- Extremely popular
- Has significant support for running on HPC resources
- It does not involve a centralized server or network connectivity
- Supports the pilot job model and advanced queuing schemes via Dask Jobqueue
- Supports adaptive scaling of compute resources

Cons:

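
For context on the adaptive scaling point in the Pros list above, a Dask cluster can grow and shrink its worker pool with the workload. The snippet below is a minimal sketch using dask-jobqueue; the resource values are placeholders, not recommendations.

```python
# Editorial sketch (not part of the diff): adaptive scaling with dask-jobqueue.
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

# Placeholder resource values; adjust for the target machine.
cluster = SLURMCluster(cores=48, memory="64 GB", walltime="00:10:00")
cluster.adapt(minimum_jobs=0, maximum_jobs=4)  # grow/shrink the pool of Slurm jobs on demand
client = Client(cluster)
```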
22 changes: 8 additions & 14 deletions docs/user/wflow_engine/executors2.md
@@ -258,16 +258,16 @@ When deploying calculations for the first time, it's important to start simple,

=== "Dask"

From an interactive resource like a Jupyter Notebook or IPython kernel on the login node of the remote machine, run the following to instantiate a Dask `SLURMCluster`:
From an interactive resource like a Jupyter Notebook or IPython kernel on the login node of the remote machine, run the following to instantiate a Dask [`SLURMCluster`](https://jobqueue.dask.org/en/latest/generated/dask_jobqueue.SLURMCluster.html):

```python
from dask.distributed import Client
from dask_jobqueue import SLURMCluster

n_slurm_jobs = 1 # (1)!
n_nodes_per_calc = 1 # (2)!
n_cores_per_node = 48 # (3)!
mem_per_node = "64 GB" # (4)!
n_slurm_jobs = 1
n_nodes_per_calc = 1
n_cores_per_node = 48
mem_per_node = "64 GB"

cluster_kwargs = {
# Dask worker options
@@ -276,7 +276,7 @@ When deploying calculations for the first time, it's important to start simple,
"memory": mem_per_node,
# SLURM options
"shebang": "#!/bin/bash",
"account": "MyAccountName", # (5)!
"account": "MyAccountName", # (1)!
"walltime": "00:10:00",
"job_mem": "0",
"job_script_prologue": [
@@ -292,15 +292,9 @@ When deploying calculations for the first time, it's important to start simple,
client = Client(cluster)
```

1. Number of Slurm jobs to launch in parallel.

2. Number of nodes to reserve for each Slurm job.

3. Number of CPU cores per node.

4. Total memory per node.
1. Make sure to replace this with the account name to charge.

5. Make sure to replace this with the account name to charge.
Then run the following code:

```python
from ase.build import bulk
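
As a rough, hedged sketch of how a quacc calculation might then be dispatched through the Dask `client` created above: the EMT `relax_job` recipe and the `Delayed`-based dispatch are assumptions about quacc's Dask integration, not taken from this diff.

```python
# Editorial sketch (not from this diff): dispatching a quacc recipe via Dask.
# Assumes quacc's Dask workflow engine is enabled and reuses the `client`
# created from the SLURMCluster block above.
from ase.build import bulk
from quacc.recipes.emt.core import relax_job  # assumed EMT relaxation recipe

atoms = bulk("Cu")
delayed = relax_job(atoms)  # with the Dask engine, quacc jobs return Dask Delayed objects
result = client.compute(delayed).result()  # run on the SLURMCluster workers and block for the result
print(result["atoms"])
```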
2 changes: 1 addition & 1 deletion pyproject.toml
@@ -53,7 +53,7 @@ phonons = ["phonopy>=2.20.0"]
prefect = ["prefect>=2.13.1", "prefect-dask>=0.2.4", "dask-jobqueue>=0.8.2"]
redun = ["redun>=0.16.2"]
tblite = ["tblite[ase]>=0.3.0; platform_system=='Linux'"]
dev = ["black>=23.7.0", "codecov-cli>=0.4.1", "docformatter>=1.7.5", "isort>=5.12.0", "pytest>=7.4.0", "pytest-cov>=3.0.0", "ruff>=0.0.285"]
dev = ["black>=23.7.0", "codecov-cli>=0.4.1", "isort>=5.12.0", "pytest>=7.4.0", "pytest-cov>=3.0.0", "ruff>=0.0.285"]
docs = [
"blacken-docs>=1.16.0",
"mkdocs-material>=9.1.21",
