Update action
G-071 committed Mar 11, 2024
1 parent 304e8f1 commit 66e10c5
Showing 2 changed files with 3 additions and 1 deletion.
2 changes: 1 addition & 1 deletion .github/workflows/documentation.yml
@@ -16,7 +16,7 @@ jobs:
steps:
# checkout repository
- name: Checkout cppuddle
- uses: actions/checkout@v2
+ uses: actions/checkout@v4
with:
path: cppuddle
# install dependencies
2 changes: 2 additions & 0 deletions README.md
@@ -18,6 +18,8 @@ In this use-case, allocating GPU buffers for all sub-grids in advance would have
- Executor pools and various scheduling policies (round robin, priority queue, multi-GPU), which rely on reference counting to gauge the current load of an executor instead of querying the device itself. Tested with CUDA, HIP and Kokkos executors provided by HPX / HPX-Kokkos.
- Special Executors/Allocators for on-the-fly GPU work aggregation (using HPX).

The documentation is available [here](https://sc-sgs.github.io/CPPuddle/index.html). In particular, the public functionality for memory recycling is available in the namespace [memory_recycling](https://sc-sgs.github.io/CPPuddle/namespacecppuddle_1_1memory__recycling.html), the public functionality for the executor pools is available in the namespace [executor_recycling](https://sc-sgs.github.io/CPPuddle/namespacecppuddle_1_1executor__recycling.html), and the work aggregation (kernel fusion) functionality is available in the namespace [work_aggregation](https://sc-sgs.github.io/CPPuddle/namespacecppuddle_1_1kernel__aggregation.html).

#### Requirements

- C++17
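As a rough illustration of the `memory_recycling` namespace that the added README paragraph links to, a minimal usage sketch is shown below. The header path and the `recycle_std` allocator name are assumptions taken from the linked namespace documentation, not from the diff itself; consult the docs for the exact identifiers.

```cpp
// Minimal sketch only: header path and allocator alias are assumptions
// based on the cppuddle::memory_recycling documentation linked above.
#include <cstddef>
#include <vector>

#include <cppuddle/memory_recycling/std_recycling_allocators.hpp> // assumed header

void compute_step(std::size_t n) {
  // std::vector backed by a CPPuddle recycling host allocator: when the
  // vector is destroyed, its allocation is returned to an internal buffer
  // pool and handed out again to later containers of matching type and
  // size, avoiding repeated system allocations in hot loops.
  std::vector<float, cppuddle::memory_recycling::recycle_std<float>> buffer(n);
  // ... fill `buffer` and launch CPU/GPU work on it ...
}
```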
