Rename references to master branch #242

Open — wants to merge 1 commit into base: gh-pages
2 changes: 1 addition & 1 deletion docs/Announcements/2022-04-06-announcement.md
@@ -9,7 +9,7 @@ brief: FY23 Allocations, Documentation, Eagle Login Nodes, CSC Tutorial
The Eagle allocation process for FY23 is scheduled to open on May 11, with applications due June 8. The application process will be an update of the process used in FY22, with additional information requested to help manage the transition from Eagle to Kestrel. HPC Operations will host a webinar on May 17 to explain the application process. Watch for announcements.

# Documentation
We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems, which are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/master/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages.
We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems, which are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/code-examples/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages.

# Eagle login node etiquette
Eagle login nodes are shared resources that are heavily utilized. We have some controls in place to limit per-user memory and CPU use that will ramp down your processes' usage over time. We recommend that any sustained heavy use of memory and CPU take place on compute nodes, where these limits aren't in place. If you only need a node for an hour, nodes in the debug partition are available. We permit compiles and file operations on the login nodes, but discourage multi-threaded operations or long, sustained operations against the file system. We cannot impose the same limits on file system operations as we do on memory and CPU, so if you slow the file system on a login node, you slow it for everyone on that login node. Lastly, FastX, the remote windowing package on the ED nodes, is a licensed product. When you are done using FastX, please log all the way out to ensure licenses are available for all users.
2 changes: 1 addition & 1 deletion docs/Announcements/2022-05-04-announcement.md
@@ -41,4 +41,4 @@ The configuration file will also apply to command-line ssh in Windows, as well.
The Lustre file system that hosts /projects, /scratch, /shared-projects, and /datasets works most efficiently when it is under 80% full. Please do your part to keep the file system under 80% full by cleaning up your /projects, /scratch, and /shared-projects spaces.
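One way to find cleanup candidates is to scan a directory tree for files that are both large and stale. The sketch below uses only the Python standard library; the age and size thresholds are illustrative, not policy:

```python
# Sketch: list files under a project directory that are older than `days`
# and at least `min_bytes` in size, so you can decide what to clean up.
import os
import time

def stale_files(root, days=180, min_bytes=1 << 30):
    """Yield (path, size_in_bytes) for large files not modified in `days` days."""
    cutoff = time.time() - days * 86400
    for dirpath, _, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            try:
                st = os.stat(path)
            except OSError:
                continue  # file vanished or is unreadable; skip it
            if st.st_mtime < cutoff and st.st_size >= min_bytes:
                yield path, st.st_size
```

For example, `stale_files("/projects/myproj", days=365, min_bytes=10**9)` would list gigabyte-plus files untouched for a year.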

# Documentation
We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems, which are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/master/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages.
We would like to announce our user-contributed [documentation repository](https://github.com/NREL/HPC) and [website](https://nrel.github.io/HPC/) for Eagle and other NREL HPC systems, which are open to both NREL and non-NREL users. This repository serves as a collection of code examples, executables, and utilities to benefit the NREL HPC community. It also hosts a site that provides more verbose documentation and examples. If you would like to contribute or recommend a topic to be covered, please open an issue or a pull request in the repository. Our [contribution guidelines](https://github.com/NREL/HPC/blob/code-examples/CONTRIBUTING.md) offer more detailed instructions on how to add content to the pages.
@@ -71,7 +71,7 @@ If everything works correctly, you will see an output similar to:

RL algorithms are notorious for the amount of data they need to collect in order to learn policies. The more data collected, the better the training (usually) is. The best way to gather it is to run many Gym instances in parallel and collect experience, and this is where RLlib assists.

[RLlib](https://docs.ray.io/en/master/rllib/index.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports the major deep learning frameworks, such as TensorFlow and PyTorch, although most parts are framework-agnostic and can be used with either one.
[RLlib](https://docs.ray.io/en/code-examples/rllib/index.html) is an open-source library for reinforcement learning that offers both high scalability and a unified API for a variety of applications. It supports the major deep learning frameworks, such as TensorFlow and PyTorch, although most parts are framework-agnostic and can be used with either one.

The RL policy learning examples provided in this tutorial demonstrate the RLlib abilities. For convenience, the `CartPole-v0` OpenAI Gym environment will be used.

@@ -84,7 +84,7 @@ Begin the trainer by importing the `ray` package:
import ray
from ray import tune
```
`Ray` provides an API readily available for building [distributed applications](https://docs.ray.io/en/master/index.html). On top of it sit several problem-solving libraries, one of which is RLlib.
`Ray` provides an API readily available for building [distributed applications](https://docs.ray.io/en/code-examples/index.html). On top of it sit several problem-solving libraries, one of which is RLlib.

`Tune` is another of `Ray`'s libraries, used for scalable hyperparameter tuning. All RLlib trainers (scripts for RL agent training) are compatible with the Tune API, making experimentation easy and streamlined.

@@ -139,7 +139,7 @@ tune.run(
```
The RLlib trainer is ready!

Beyond the aforementioned default hyperparameters, [every RL algorithm](https://docs.ray.io/en/master/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters, with default values that can be tuned in advance.
Beyond the aforementioned default hyperparameters, [every RL algorithm](https://docs.ray.io/en/code-examples/rllib-algorithms.html#available-algorithms-overview) provided by RLlib has its own hyperparameters, with default values that can be tuned in advance.
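The override pattern can be sketched with plain dictionaries. The keys below (`gamma`, `lr`, `train_batch_size`, `num_workers`) follow RLlib's config style of this era, but the exact names and defaults depend on your installed version, so treat them as assumptions and check the algorithm overview linked above:

```python
# A few PPO-style defaults, abbreviated for illustration (assumed values --
# consult your RLlib version's documentation for the real defaults):
ppo_defaults = {
    "gamma": 0.99,            # discount factor
    "lr": 5e-5,               # learning rate
    "train_batch_size": 4000, # samples collected per training iteration
    "num_workers": 2,         # parallel rollout workers
}

# Values tuned for this particular experiment:
overrides = {"lr": 1e-4, "num_workers": 8}

# Later entries win, mirroring how a user config is merged over the defaults:
config = {**ppo_defaults, **overrides, "env": "CartPole-v0"}
```

With Ray installed, a dict like this is what gets passed as the `config` argument of `tune.run`.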

The code of the trainer in this example can be found [in the tutorial repo](https://github.com/erskordi/HPC/blob/HPC-RL/languages/python/openai_rllib/simple-example/simple_trainer.py).

@@ -402,7 +402,7 @@ Function `register_env` takes two arguments:
env_name = "custom-env"
register_env(env_name, lambda config: BasicEnv())
```
Once again, RLlib provides a [detailed explanation](https://docs.ray.io/en/master/rllib-env.html) of how `register_env` works.
Once again, RLlib provides a [detailed explanation](https://docs.ray.io/en/code-examples/rllib-env.html) of how `register_env` works.

The `tune.run` function then uses the `env_name` defined above instead of `args.name_env`.
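The `BasicEnv` passed to `register_env` is not shown here, so below is a standard-library sketch of the interface such an environment must expose — `reset` and `step` in the classic Gym signature — using a trivial one-step guessing game as an assumed stand-in (a real env would also define `action_space` and `observation_space`, which RLlib requires):

```python
import random

class BasicEnv:
    """Toy Gym-style environment: guess whether a hidden coin is heads (1) or tails (0)."""

    def reset(self):
        self._coin = random.randint(0, 1)
        return 0  # constant observation; nothing useful to learn here

    def step(self, action):
        reward = 1.0 if action == self._coin else 0.0
        done = True  # episodes last a single step
        return 0, reward, done, {}  # observation, reward, done, info

env = BasicEnv()
obs = env.reset()
obs, reward, done, info = env.step(1)
```

RLlib's rollout workers drive exactly this loop: call `reset`, then call `step` repeatedly until `done` is true, collecting the rewards as training experience.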

18 changes: 9 additions & 9 deletions docs/Documentation/Software_Tools/Jupyter/index.md
@@ -160,17 +160,17 @@ Automation makes life better!

### Auto-launching with an sbatch script

Full directions are included in the [Jupyter repo](https://github.com/NREL/HPC/tree/master/general/Jupyterhub/jupyter).
Full directions are included in the [Jupyter repo](https://github.com/NREL/HPC/tree/code-examples/general/Jupyterhub/jupyter).

Download [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/sbatch_jupyter.sh) and [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/auto_launch_jupyter.sh)
Download [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/sbatch_jupyter.sh) and [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/auto_launch_jupyter.sh)

Edit [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/sbatch_jupyter.sh) to change:
Edit [sbatch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/sbatch_jupyter.sh) to change:

`--account=*yourallocation*`

`--time=*timelimit*`

Run [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/master/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) and follow the directions.
Run [auto_launch_jupyter.sh](https://github.com/NREL/HPC/blob/code-examples/general/Jupyterhub/jupyter/auto_launch_jupyter.sh) and follow the directions.

That's it!

@@ -278,17 +278,17 @@ You can also run shell commands inside a cell. For example:

[Awesome Jupyterlab](https://github.com/mauhai/awesome-jupyterlab)

[Plotting with matplotlib](https://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/master/Lecture-4-Matplotlib.ipynb)
[Plotting with matplotlib](https://nbviewer.jupyter.org/github/jrjohansson/scientific-python-lectures/blob/code-examples/Lecture-4-Matplotlib.ipynb)

[Python for Data Science](https://nbviewer.jupyter.org/github/gumption/Python_for_Data_Science/blob/master/Python_for_Data_Science_all.ipynb)
[Python for Data Science](https://nbviewer.jupyter.org/github/gumption/Python_for_Data_Science/blob/code-examples/Python_for_Data_Science_all.ipynb)

[Numerical Computing in Python](https://nbviewer.jupyter.org/github/phelps-sg/python-bigdata/blob/master/src/main/ipynb/numerical-slides.ipynb)
[Numerical Computing in Python](https://nbviewer.jupyter.org/github/phelps-sg/python-bigdata/blob/code-examples/src/main/ipynb/numerical-slides.ipynb)

[The Sound of Hydrogen](https://nbviewer.jupyter.org/github/Carreau/posts/blob/master/07-the-sound-of-hydrogen.ipynb)
[The Sound of Hydrogen](https://nbviewer.jupyter.org/github/Carreau/posts/blob/code-examples/07-the-sound-of-hydrogen.ipynb)

[Plotting Pitfalls](https://anaconda.org/jbednar/plotting_pitfalls/notebook)

[GeoJSON Extension](https://github.com/jupyterlab/jupyter-renderers/tree/master/packages/geojson-extension)
[GeoJSON Extension](https://github.com/jupyterlab/jupyter-renderers/tree/code-examples/packages/geojson-extension)


## Happy Notebooking!
2 changes: 1 addition & 1 deletion docs/Documentation/Systems/Swift/running.md
@@ -365,7 +365,7 @@ ml openmpi gcc vasp

#### get input and set it up
#### This is from an old benchmark test
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2

mkdir $SLURM_JOB_ID
cp input/* $SLURM_JOB_ID
44 changes: 22 additions & 22 deletions docs/Documentation/Systems/Vermillion/running.md
@@ -355,9 +355,9 @@ There are actually several builds of Vasp on Vermilion, including builds of VASP

The run times and additional information can be found in the file /nopt/nrel/apps/210929a/example/vasp/versions. The run on the GPU nodes is considerably faster than the CPU node runs.

The data set for these runs is from a standard NREL VASP benchmark; see [https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2](https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2). This is a system of 519 atoms (Ag504C4H10S1).
The data set for these runs is from a standard NREL VASP benchmark; see [https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2](https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2). This is a system of 519 atoms (Ag504C4H10S1).

There is an NREL report that discusses running this test case, as well as a smaller one, with various settings of nodes, tasks-per-node, and OMP_NUM_THREADS. It can be found at: [https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202](https://github.com/NREL/HPC/tree/master/applications/vasp/Performance%20Study%202)
There is an NREL report that discusses running this test case, as well as a smaller one, with various settings of nodes, tasks-per-node, and OMP_NUM_THREADS. It can be found at: [https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202](https://github.com/NREL/HPC/tree/code-examples/applications/vasp/Performance%20Study%202)

### Running multi-node VASP jobs on Vermilion

@@ -446,15 +446,15 @@ ml wget

#### get input and set it up
#### This is from an old benchmark test
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2


mkdir input

wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS

# mpirun is recommended (necessary for multi-node calculations)
I_MPI_OFI_PROVIDER=tcp mpirun -iface ens7 -np 16 vasp_std
@@ -536,15 +536,15 @@ ml wget

#### get input and set it up
#### This is from an old benchmark test
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2


mkdir input

wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS

# mpirun is recommended (necessary for multi-node calculations)
I_MPI_OFI_PROVIDER=tcp mpirun -iface ens7 -np 16 vasp_std
@@ -621,15 +621,15 @@ ml wget

#### get input and set it up
#### This is from an old benchmark test
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2


mkdir input

wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS

srun --mpi=pmi2 -n 16 vasp_std

@@ -701,15 +701,15 @@ ml wget

#### get input and set it up
#### This is from an old benchmark test
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/master/bench2
#### see https://github.nrel.gov/ESIF-Benchmarks/VASP/tree/code-examples/bench2


mkdir input

wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/master/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/INCAR?token=AAAALJZRV4QFFTS7RC6LLGLBBV67M -q -O INCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POTCAR?token=AAAALJ6E7KHVTGWQMR4RKYTBBV7SC -q -O POTCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/POSCAR?token=AAAALJ5WKM2QKC3D44SXIQTBBV7P2 -q -O POSCAR
wget https://github.nrel.gov/raw/ESIF-Benchmarks/VASP/code-examples/bench2/input/KPOINTS?token=AAAALJ5YTSCJFDHUUZMZY63BBV7NU -q -O KPOINTS

mpirun -npernode 1 vasp_std > vasp.$SLURM_JOB_ID
```
4 changes: 2 additions & 2 deletions docs/Documentation/languages/bash/bash-starter.md
@@ -227,9 +227,9 @@ see `help declare` at the command line for more information on types that can be
Further Resources
--------------------------

[NREL HPC Github](https://github.com/NREL/HPC/tree/master/general/beginner/bash) - User-contributed bash scripts and examples that you can use on HPC systems.
[NREL HPC Github](https://github.com/NREL/HPC/tree/code-examples/general/beginner/bash) - User-contributed bash scripts and examples that you can use on HPC systems.

[BASH cheat sheet](https://github.com/NREL/HPC/blob/master/general/beginner/bash/cheatsheet.sh) - A concise and extensive list of example commands, built-ins, control structures, and other useful bash scripting material.
[BASH cheat sheet](https://github.com/NREL/HPC/blob/code-examples/general/beginner/bash/cheatsheet.sh) - A concise and extensive list of example commands, built-ins, control structures, and other useful bash scripting material.


