
Fixing vale
germa89 committed Apr 4, 2024
1 parent 417581e commit f87c2df
Showing 3 changed files with 28 additions and 22 deletions.
4 changes: 2 additions & 2 deletions doc/source/user_guide/hpc.rst
@@ -8,8 +8,8 @@ High performance clusters (HPC)

This page presents an overview of how to use PyMAPDL on an HPC cluster.
At the moment, only the SLURM scheduler is considered.
However, many of the assumptions for this scheduler might apply to other schedulers like PBS, SGE, LSF, ...

However, many of the assumptions for this scheduler might apply to other schedulers
like PBS, SGE, or LSF.


.. include:: hpc_slurm.rst
35 changes: 18 additions & 17 deletions doc/source/user_guide/hpc_slurm.rst
@@ -18,7 +18,7 @@ SLURM on HPC clusters.
What is SLURM?
==============

SLURM is an open-source workload manager and job scheduler designed for Linux
SLURM is an open source workload manager and job scheduler designed for Linux
clusters of all sizes. It efficiently allocates resources (compute nodes, CPU
cores, memory, GPUs) to jobs submitted by users.

@@ -29,7 +29,7 @@ Basic concepts
- **Compute node**: A type of node used only for running processes. It is not accessible from outside the cluster.
- **Login node**: A type of node used only for login and job submission. No computation should be performed on it. It is sometimes referred to as 'virtual desktop infrastructure' (VDI).
- **Partition**: A logical grouping of nodes with similar characteristics
(e.g., CPU architecture, memory size).
(for example, CPU architecture or memory size).
- **Job**: A task submitted to SLURM for execution.
- **Queue**: A waiting area where jobs are held until resources become available.
- **Scheduler**: The component responsible for deciding which job gets executed
@@ -60,7 +60,7 @@ job parameters and commands to execute. Here's a basic example:

**my_script.sh**

.. code:: bash
.. code-block:: bash
#!/bin/bash
#SBATCH --job-name=myjob
@@ -115,14 +115,14 @@ Install PyMAPDL

The PyMAPDL Python package (``ansys-mapdl-core``) must be installed in a virtual environment that is accessible from the compute nodes.

To do that you can find where your python distribution is installed using:
To do that, you can find where your Python distribution is installed by running:

.. code-block:: console
user@machine:~$ which python3
/usr/bin/python3
You can check which version of Python you have by doing:
You can print the version of Python available by running:

.. code-block:: console
@@ -171,7 +171,7 @@ Then you can install PyMAPDL after activating the virtual environment:
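The steps above can be sketched as follows. This is a minimal sketch: the ``/tmp/pymapdl-venv`` path is only an example (on a real cluster, place the environment on a filesystem shared with the compute nodes), and the ``pip install`` line requires network access, so it is shown commented out.

```shell
# Create a virtual environment (example path; use a shared filesystem on a cluster)
python3 -m venv /tmp/pymapdl-venv

# Activate it
source /tmp/pymapdl-venv/bin/activate

# Install PyMAPDL (requires network access from this node):
# pip install ansys-mapdl-core

# Confirm the active interpreter now comes from the virtual environment
python -c "import sys; print(sys.prefix)"
```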
You can test if this virtual environment is accessible from the compute nodes by
running the following bash script ``test.sh``:

.. code:: bash
.. code-block:: bash
#!/bin/bash
#SBATCH --job-name=myjob
@@ -194,7 +194,7 @@ This command might take around 1-2 minutes to complete depending on the amount of
resources available in the cluster.
The console output should show:

.. code:: text
.. code-block:: text
Testing Python!
PyMAPDL version 0.68.1 was successfully imported!
@@ -234,7 +234,7 @@ a compute node using:
Many HPC infrastructures use environment managers to load and unload software packages using modules
and environment variables.
You should check that the correct module is loaded in your script.
Hence you might want to make sure that the correct module is loaded in your script.
Two of the most common environment managers are `Environment Modules <modules_docs_>`_ and `Lmod <lmod_docs_>`_.
Check your cluster documentation to find out which environment manager your cluster uses and how to load
Python with it. If you find any issue, contact your cluster administrator.
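A defensive way to handle this in a job script is sketched below. The module name ``python/3.10`` is hypothetical; query your cluster's catalog (for example with ``module avail``) for the real name.

```shell
# Load a site-specific Python module only if an environment-module system
# is available on this node (the "module" command may be absent elsewhere).
if command -v module >/dev/null 2>&1; then
    module load python/3.10   # hypothetical module name; check "module avail"
fi

# Verify which Python interpreter is now first on the PATH
which python3
```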
@@ -248,7 +248,7 @@ and call the Python script.

**Python script:** ``pymapdl_script.py``

.. code:: python
.. code-block:: python
from ansys.mapdl.core import launch_mapdl
@@ -265,7 +265,7 @@ and call the Python script.
**Bash script:** ``job.sh``

.. code:: bash
.. code-block:: bash
source /home/user/.venv/bin/activate
python pymapdl_script.py
@@ -290,7 +290,8 @@ and you pass all the environment variables to the job:
The ``--export=ALL`` argument might not be needed, depending on the cluster configuration.
Furthermore, you can omit the ``python`` call in the above command, if there is the Python sheabag (``#!/usr/bin/python3``) in the ``pymapdl_script.py`` script first line.
Furthermore, you can omit the ``python`` call in the preceding command if the Python shebang
(``#!/usr/bin/python3``) is present in the first line of the ``pymapdl_script.py`` script.

.. code-block:: console
@@ -306,7 +307,7 @@ instead of ``srun``, but in that case, the bash file is needed:
The expected output of the job should be:

.. code:: text
.. code-block:: text
Number of CPUs: 10.0
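One way a script like ``pymapdl_script.py`` might derive a CPU count is sketched below. This is only an assumption for illustration: it reads the standard ``SLURM_CPUS_ON_NODE`` environment variable that SLURM sets inside a job, while the actual example may query MAPDL itself (which would explain the ``10.0`` float in the output above).

```python
import os


def slurm_cpu_count(default=1):
    """Return the number of CPUs SLURM allocated on this node.

    Falls back to ``default`` when running outside a SLURM job,
    where the environment variable is not set.
    """
    value = os.environ.get("SLURM_CPUS_ON_NODE")
    return int(value) if value else default


print(f"Number of CPUs: {slurm_cpu_count()}")
```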
@@ -392,7 +393,7 @@ It's a versatile tool for managing jobs, nodes, partitions, and more.
**Common Options:**

- ``--name=jobname``: Cancels all jobs with a specific name.
- ``--state=pending``: Cancels all jobs in a specific state, e.g., pending jobs.
- ``--state=pending``: Cancels all jobs in a specific state, for example, pending jobs.

``sacct`` - Accounting Information
----------------------------------
@@ -420,10 +421,10 @@ It's a versatile tool for managing jobs, nodes, partitions, and more.
**Common Options:**

- ``--format``: Specifies which fields to display, e.g., ``--format=JobID,JobName,State``.
- ``-S`` and ``-E``: Set the start and end time for the report, e.g., ``-S 2023-01-01 -E 2023-01-31``.
- ``--format``: Specifies which fields to display, for example, ``--format=JobID,JobName,State``.
- ``-S`` and ``-E``: Set the start and end time for the report, for example, ``-S 2023-01-01 -E 2023-01-31``.

For more detailed information, refer to the official SLURM documentation or use the `man` command (e.g., `man squeue`) to explore all available options and their usage.
For more detailed information, refer to the official SLURM documentation or use the `man` command (for example, `man squeue`) to explore all available options and their usage.


Best Practices
@@ -484,7 +485,7 @@ resources such as number of nodes, CPU cores, memory, and time limit.
Requesting Resources
~~~~~~~~~~~~~~~~~~~~
Use the `--constraint` flag to request specific hardware
configurations (e.g., CPU architecture) or the `--gres` flag for requesting generic
configurations (for example, CPU architecture) or the `--gres` flag for requesting generic
resources like GPUs.
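A sketch of how these flags might appear as ``#SBATCH`` directives in a job script. The ``skylake`` feature name is hypothetical (constraint values are site-specific; list them with ``sinfo``), and ``gpu:2`` assumes the cluster exposes GPUs as a generic resource.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_job
#SBATCH --constraint=skylake   # hypothetical feature name; check your cluster
#SBATCH --gres=gpu:2           # request two GPUs per node
```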

Resource Limits
11 changes: 8 additions & 3 deletions doc/styles/Vocab/ANSYS/accept.txt
@@ -75,6 +75,7 @@ FSW
GitHub
Gmsh
GPa
GPUs
GUI
hexahedral
hostname
@@ -128,18 +129,23 @@ PyAnsys
PyAnsys Math
PyDPF-Core
PyDPF-Post
PyMAPDL
pymapdl
PyMAPDL
Python
QRDAMP
Radiosity
Rao
righ
RNNN
sbatch
singl
Slurm
SLURM
smal
sord
spotweld
squeue
srun
struc
subselected
substep
@@ -170,5 +176,4 @@ Windows Subsystem
Windows Subsystem for Linux
wsl
WSL
Zhu
SLURM
Zhu
