Merge pull request #4 from fabi200123/small-nits
Update docs
gabriel-samfira authored Aug 8, 2024
2 parents a12ad2f + b9aafbe commit 82d70d2
Showing 4 changed files with 17 additions and 17 deletions.
2 changes: 1 addition & 1 deletion README.md
@@ -27,7 +27,7 @@ The goal of ```GARM``` is to be simple to set up, simple to configure and simple

GARM supports creating pools in either GitHub itself or in your own deployment of [GitHub Enterprise Server](https://docs.github.com/en/enterprise-server@3.10/admin/overview/about-github-enterprise-server). For instructions on how to use ```GARM``` with GHE, see the [credentials](/doc/github_credentials.md) section of the documentation.

-Through the use of providers, `GARM` can create runners in a variety of environments using the same `GARM` instance. Whether you want to create pools of runners in your OpenStack cloud, your Azure cloud or your Kubernetes cluster, that is easily achieved by just installing the appropriate providers, configuring them in `GARM` and creating pools that use them. You can create zero-runner pools for instances with high costs (large VMs, GPU enabled instances, etc) and have them spin up on demand, or you can create large pools of eagerly creaated k8s backed runners that can be used for your CI/CD pipelines at a moment's notice. You can mix them up and create pools in any combination of providers or resource allocations you want.
+Through the use of providers, `GARM` can create runners in a variety of environments using the same `GARM` instance. Whether you want to create pools of runners in your OpenStack cloud, your Azure cloud or your Kubernetes cluster, that is easily achieved by just installing the appropriate providers, configuring them in `GARM` and creating pools that use them. You can create zero-runner pools for instances with high costs (large VMs, GPU enabled instances, etc) and have them spin up on demand, or you can create large pools of eagerly created k8s backed runners that can be used for your CI/CD pipelines at a moment's notice. You can mix them up and create pools in any combination of providers or resource allocations you want.

Here is a brief architectural diagram of how GARM reacts to workflows triggered in GitHub (click the image to see a larger version):

8 changes: 4 additions & 4 deletions doc/config.md
@@ -94,9 +94,9 @@ For example, in a scenario where you expose the API endpoint directly, this sett
callback_url = "https://garm.example.com/api/v1/callbacks"
```

-Authentication is done using a short-lived JWT token, that gets generated for a particular instance that we are spinning up. That JWT token grants access to the instance to only update it's own status and to fetch metadata for itself. No other API endpoints will work with that JWT token. The validity of the token is equal to the pool bootstrap timeout value (default 20 minutes) plus the garm polling interval (5 minutes).
+Authentication is done using a short-lived JWT token, that gets generated for a particular instance that we are spinning up. That JWT token grants access to the instance to only update its own status and to fetch metadata for itself. No other API endpoints will work with that JWT token. The validity of the token is equal to the pool bootstrap timeout value (default 20 minutes) plus the garm polling interval (5 minutes).
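To make the token-validity arithmetic above concrete, here is a minimal sketch in plain Python (not GARM code; the values are the defaults stated above and may differ in your deployment):

```python
from datetime import timedelta

# Defaults stated in the docs above; real deployments may override them.
pool_bootstrap_timeout = timedelta(minutes=20)
garm_polling_interval = timedelta(minutes=5)

# The instance JWT stays valid for the bootstrap timeout plus one polling interval.
token_validity = pool_bootstrap_timeout + garm_polling_interval
print(token_validity)  # 0:25:00
```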

-There is a sample ```nginx``` config [in the testdata folder](/testdata/nginx-server.conf). Feel free to customize it whichever way you see fit.
+There is a sample ```nginx``` config [in the testdata folder](/testdata/nginx-server.conf). Feel free to customize it in any way you see fit.

### The metadata_url option

@@ -128,7 +128,7 @@ And restart garm. You can then use the following command to start profiling:
go tool pprof http://127.0.0.1:9997/debug/pprof/profile?seconds=120
```

-Important note on profiling when behind a reverse proxy. The above command will hang for a fairly long time. Most reverse proxies will timeout after about 60 seconds. To avoid this, you should only profile on localhost by connecting directly to garm.
+> **IMPORTANT NOTE on profiling when behind a reverse proxy**: The above command will hang for a fairly long time. Most reverse proxies will timeout after about 60 seconds. To avoid this, you should only profile on localhost by connecting directly to garm.
It's also advisable to exclude the debug server URLs from your reverse proxy and only make them available locally.
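As a sketch of what that exclusion might look like in `nginx` (a hypothetical snippet, not taken from the sample config; the `/debug/pprof/` path is an assumption based on the standard Go pprof URL layout):

```nginx
# Hypothetical example: refuse to proxy the pprof debug endpoints,
# so profiling only works by connecting to garm directly on localhost.
location /debug/pprof/ {
    return 404;
}
```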

@@ -289,7 +289,7 @@ If you want to implement an external provider, you can use this file for anything

#### Available external providers

-For non testing purposes, there are two external providers currently available:
+For non-testing purposes, these are the external providers currently available:

* [OpenStack](https://github.com/cloudbase/garm-provider-openstack)
* [Azure](https://github.com/cloudbase/garm-provider-azure)
6 changes: 3 additions & 3 deletions doc/quickstart.md
@@ -136,7 +136,7 @@ docker run -d \
ghcr.io/cloudbase/garm:v0.1.4
```

-You will notice we also mounted the LXD unix socket from the host inside the container where the config you pasted expects to find it. If you plan to use an external provider that does not need to connect to LXD over a unix socket, feel free to remove that mount.
+You will notice that we also mounted the LXD unix socket from the host inside the container where the config you pasted expects to find it. If you plan to use an external provider that does not need to connect to LXD over a unix socket, feel free to remove that mount.
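For reference, a run without the LXD socket mount might look like this (a sketch, not the exact command from the collapsed section above; the container name, port mapping and `/etc/garm` config path are assumptions based on the quickstart):

```bash
# Hypothetical: GARM with only the config bind-mounted, no LXD socket.
docker run -d \
  --name garm \
  -p 80:80 \
  -v /etc/garm:/etc/garm:rw \
  ghcr.io/cloudbase/garm:v0.1.4
```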

Check the logs to make sure everything is working as expected:

@@ -333,7 +333,7 @@ In this example, we add a new github endpoint called `example`. The `ca-cert-path

Before we can add a new entity, we need github credentials to interact with that entity (manipulate runners, create webhooks, etc). Credentials are tied to a specific github endpoint. In this section we'll be adding credentials that are valid for either [github.com](https://github.com) or your own GHES server (if you added one in the previous section).

-When creating a new entity (repo, org, enterprise) using the credentials you define here, GARM will automatically associate that entity with the gitHub endpoint that the credentials use.
+When creating a new entity (repo, org, enterprise) using the credentials you define here, GARM will automatically associate that entity with the github endpoint that the credentials use.

If you want to swap the credentials for an entity, the new credentials will need to be associated with the same endpoint as the old credentials.

@@ -620,6 +620,6 @@ gabriel@rossak:~$ garm-cli job ls

There are no jobs sent yet to my GARM install, but once you start sending jobs, you'll see them here as well.

-That's it! You now have a working GARM installation. You can add more repos, orgs or enterprises and create more pools. You can also add more providers for different clouds and credentials with access to different GitHub resources.
+That's it! Now you have a working GARM installation. You can add more repos, orgs or enterprises and create more pools. You can also add more providers for different clouds and credentials with access to different GitHub resources.

Check out the [Using GARM](/doc/using_garm.md) guide for more details on how to use GARM.
18 changes: 9 additions & 9 deletions doc/using_garm.md
@@ -48,7 +48,7 @@ While using the GARM cli, you will most likely spend most of your time listing p

## Controller operations

-The `controller` is essentially GARM itself. Every deployment of GARM will have its own controller ID which will be used to tag runners in github. The controller is responsible for managing runners, webhooks, repositories, organizations and enterprises. There are a few settings at the controller level which you can tweak and we will cover them below.
+The `controller` is essentially GARM itself. Every deployment of GARM will have its own controller ID which will be used to tag runners in github. The controller is responsible for managing runners, webhooks, repositories, organizations and enterprises. There are a few settings at the controller level which you can tweak, which we will cover below.

### Listing controller info

@@ -85,7 +85,7 @@ We will see the `Controller Webhook URL` later when we set up the GitHub repo to

### Updating controller settings

-Like we've mentioned before, there are 3 URLs that are very important for normal operations:
+As we've mentioned before, there are 3 URLs that are very important for normal operations:

* `metadata_url` - Must be reachable by runners
* `callback_url` - Must be reachable by runners
@@ -145,7 +145,7 @@ Each of these providers can be used to set up a runner pool for a repository, or

GARM can be used to manage runners for repos, orgs and enterprises hosted on `github.com` or on a GitHub Enterprise Server.

-Endpoints are the way that GARM identifies where the credentials and entities you create are located and where the API endpoints for the GitHub API can be reached, along with a possible CA certificate that validates the connection. There is a default endpoint for `github.com`, so you don't need to add it. But if you're using GHES, you'll need to add an endpoint for it.
+Endpoints are the way that GARM identifies where the credentials and entities you create are located and where the API endpoints for the GitHub API can be reached, along with a possible CA certificate that validates the connection. There is a default endpoint for `github.com`, so you don't need to add it, unless you're using GHES.

### Creating a GitHub Endpoint

@@ -241,7 +241,7 @@ There are two types of credentials:
* PAT - Personal Access Token
* App - GitHub App

-To add each of these types of credentials requires slightly different command line arguments (obviously). I'm going to give you an example of both.
+To add each of these types of credentials, slightly different command line arguments (obviously) are required. I'm going to give you an example of both.

To add a PAT, you can run the following command:

@@ -318,7 +318,7 @@ To delete a credential, you can run the following command:
garm-cli github credentials delete 2
```

-Note, you may not delete credentials that are currently associated with a repository, organization or enterprise. You will need to first replace the credentials on the entity, and then you can delete the credentials.
+> **NOTE**: You may not delete credentials that are currently associated with a repository, organization or enterprise. You will need to first replace the credentials on the entity, and then you can delete the credentials.
## Repositories

@@ -381,7 +381,7 @@ garm-cli repository delete be3a0673-56af-4395-9ebf-4521fea67567

This will remove the repository from GARM, and if a webhook was installed, will also clean up the webhook from the repository.

-Note: GARM will not remove a webhook that points to the `Base Webhook URL`. It will only remove webhooks that are namespaced to the running controller.
+> **NOTE**: GARM will not remove a webhook that points to the `Base Webhook URL`. It will only remove webhooks that are namespaced to the running controller.
## Organizations

@@ -407,9 +407,9 @@ ubuntu@garm:~$ garm-cli organization add \

This will add the organization `gsamfira` to GARM, and install a webhook for it. The webhook will be validated against the secret that was generated. The only difference between adding an organization and adding a repository is that you use the `organization` subcommand instead of the `repository` subcommand, and the `--name` option represents the `name` of the organization.

-Managing webhooks for organizations is similar to managing webhooks for repositories. You can list, show, install and uninstall webhooks for organizations using the `garm-cli organization webhook` subcommand. We won't go into details here, as it's similar to managing webhooks for repositories.
+Managing webhooks for organizations is similar to managing webhooks for repositories. You can *list*, *show*, *install* and *uninstall* webhooks for organizations using the `garm-cli organization webhook` subcommand. We won't go into details here, as it's similar to managing webhooks for repositories.

-All the other operations that exist on repositories, like listing, removing, etc, also exist for organizations and enterprises. Have a look at the help for the `garm-cli organization` subcommand for more details.
+All the other operations that exist on repositories, like listing, removing, etc, also exist for organizations and enterprises. Check out the help for the `garm-cli organization` subcommand for more details.

## Enterprises

@@ -497,7 +497,7 @@ To manually add a webhook, see the [webhooks](/doc/webhooks.md) section.

Now that we have a repository, organization or enterprise added to GARM, we can create a runner pool for it. A runner pool is a collection of runners of the same type, that are managed by GARM and are used to run workflows for the repository, organization or enterprise.

-You can create multiple pools of runners for the same entity (repository, organization or enterprise), and you can create multiple pools of runners, each pool defining different runner types. For example, you can have a pool of runners that are created on AWS, and another pool of runners that are created on Azure, k8s, LXD, etc. For repositories or organizations with complex needs, you can set up a number of pools that cover a wide range of needs, based on cost, capability (GPUs, FPGAs, etc) or sheer raw computing power. You don't have to pick just one and managing all of them is done using the exact same commands, as we'll show below.
+You can create multiple pools of runners for the same entity (repository, organization or enterprise), and you can create multiple pools of runners, each pool defining different runner types. For example, you can have a pool of runners that are created on AWS, and another pool of runners that are created on Azure, k8s, LXD, etc. For repositories or organizations with complex needs, you can set up a number of pools that cover a wide range of needs, based on cost, capability (GPUs, FPGAs, etc) or sheer raw computing power. You don't have to pick just one, especially since managing all of them is done using the exact same commands, as we'll show below.

Before we create a pool, we have to decide which provider we want to use. We've listed the providers above, so let's pick one and create a pool of runners for our repository. For the purpose of this example, we'll use the `incus` provider. We'll show you how to create a pool using this provider, but keep in mind that adding another pool using a different provider is done using the exact same commands. The only difference will be in the `--image`, `--flavor` and `--extra-specs` options that you'll use when creating the pool.
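As a sketch, creating such a pool might look along these lines (hypothetical values: the entity ID is the one from the repository examples above, and the `--image`, `--flavor` and `--tags` values are assumptions for the `incus` provider; check `garm-cli pool add --help` for the authoritative flags):

```bash
# Hypothetical example: add an incus-backed pool to an existing repository.
garm-cli pool add \
  --repo be3a0673-56af-4395-9ebf-4521fea67567 \
  --provider-name incus \
  --image ubuntu:22.04 \
  --flavor default \
  --tags ubuntu,incus \
  --enabled true
```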

