fix: add inst for preinstalled (#83)
srodenhuis authored Nov 1, 2024
1 parent 88039a8 commit a4e89fd
Showing 9 changed files with 163 additions and 36 deletions.
2 changes: 1 addition & 1 deletion docs/for-devs/console/deploy-changes.md
title: Deploy changes
sidebar_label: Deploy Changes
---

When a self-service form (Build, Workload, Service, Backup, Secret) is submitted, a commit in the `values` Git repository will be prepared. When the commit is prepared, the `Deploy Changes` button at the top of the left menu will become active. To commit your changes, click on the Deploy Changes button.
17 changes: 10 additions & 7 deletions docs/for-ops/console/settings/backup.md

### Database Backups

Select to back up the database of the app.

| Setting            | Description                                                                    |
| ------------------ | ------------------------------------------------------------------------------ |
| Enabled            | Select to enable the backup of Otomi platform services                        |
| TTL After Finished | Expiration of the backup.                                                      |
| Schedule           | Cron-type expression to schedule the backup. Defaults to once a day at 00:00. |
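
For reference, the default of once a day at 00:00 corresponds to a standard five-field cron expression. A minimal illustration (standard cron syntax, not specific to this platform):

```bash
# Five-field cron expression: minute hour day-of-month month day-of-week
# "0 0 * * *" runs once a day at 00:00
SCHEDULE="0 0 * * *"
echo "Backup schedule: $SCHEDULE"
```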

### Persistent Volume Backups

:::info
The Persistent Volume Backups section will not be visible when the installation is done by Akamai Connected Cloud. This is because using Velero is not (yet) supported for Linode Volumes.
:::

:::note
To use Velero to create backups of Persistent Volumes, Object Storage needs to be enabled.
:::
To use Velero to create backups of Persistent Volumes in Linode:
2. Fill in the token and submit changes.

3. Go to the apps section in the left menu and enable Velero.
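
Once Velero is enabled, a quick sanity check is to confirm that its backup storage location is available. A minimal sketch, assuming Velero runs in the `velero` namespace (not stated on this page):

```bash
# Check that Velero can reach the configured Object Storage bucket
kubectl get backupstoragelocations -n velero

# List backups created by Velero (fully qualified to avoid clashing with other Backup CRDs)
kubectl get backups.velero.io -n velero
```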

6 changes: 4 additions & 2 deletions docs/for-ops/console/settings/dns.md
title: Platform settings
sidebar_label: DNS
---

:::info
The DNS section in the Settings will not be visible when the installation is done by Akamai Connected Cloud. In this case the DNS configuration is managed by Akamai.
:::

## DNS

:::note

DNS settings will only be active when the `otomi.hasExternalDNS=true` flag is set during installation. This can also be set after installation in Settings/General.

:::
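
As an illustration only, the flag can be passed as a Helm value at install time; the release and chart names below are placeholders, not confirmed by this page:

```bash
# Sketch: enable external DNS support at install time.
# <release> and <apl-chart> are placeholders for your actual release and chart.
helm upgrade --install <release> <apl-chart> \
  --set otomi.hasExternalDNS=true
```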

### Zones
6 changes: 5 additions & 1 deletion docs/for-ops/console/settings/ingress.md
title: Platform settings
sidebar_label: Ingress
---

:::info
The Ingress section in the Settings will not be visible when the installation is done by Akamai Connected Cloud. In this case you will not be able to create multiple ingress classes.
:::

## Ingress

By default (after installation), one ingress controller (ingress-nginx-platform) is deployed and is used to publicly expose both platform and user-created services. In the settings for ingress, an admin can:

By adding additional ingress classes, each class will get a dedicated ingress controller and a dedicated cloud load balancer. This allows grouping of services and exposing them to different networks.
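
To verify the result, you can list the ingress classes known to the cluster; each additional class configured here should appear with its own controller:

```bash
# List all ingress classes registered in the cluster
kubectl get ingressclass
```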

Read more about configuring ingress classes [here](../../how-to/ingress-classes.md).
8 changes: 6 additions & 2 deletions docs/for-ops/console/settings/key-management.md
title: Platform settings
sidebar_label: Key Management
---

:::info
The Key Management section in the Settings will not be visible when the installation is done by Akamai Connected Cloud. In this case, Age is used as the KMS.
:::

## Key management

The Key management settings section offers configuration options for the Key Management Service (KMS) used to seal and unseal sensitive information in the Values repository. At least one key is required for encrypting/decrypting the `values` repo.

It is advised to provide credentials to an external stable KMS, so that unseal keys can always be managed from one central location.
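
For context: when Age is used as the KMS (as mentioned above for Akamai Connected Cloud installations), an Age key pair can be generated with the standard Age tooling. This is only an illustration of the key format, not a platform-specific step:

```bash
# Generate an Age key pair; the public key is printed and the private key is
# written to key.txt (keep this file secret, it is the unseal key material)
age-keygen -o key.txt
```
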
29 changes: 15 additions & 14 deletions docs/for-ops/console/settings/obj.md
Object Storage needs to be configured to be able to use Velero and create database backups.
To prevent loss of data it is advised to configure Object Storage before activating apps that use Object Storage (like Loki, Harbor and Tempo).
:::

:::info
The Local Minio provider will not be visible when the installation is done by Akamai Connected Cloud.
:::

### Providers

Select one of the following Object Storage Providers:

### Linode

| Setting         | Description                                                                                                                                                                                                                           |
| --------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| region          | The name of the Linode region (Datacenter ID) where the buckets are created. See [here](https://techdocs.akamai.com/cloud-computing/docs/access-buckets-and-files-through-urls#cluster-url-s3-endpoint) for all Linode Datacenter IDs |
| accessKeyId     | The ID of the access key with read/write permissions for all buckets                                                                                                                                                                  |
| secretAccessKey | The secret of the access key used                                                                                                                                                                                                     |
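
One way to sanity-check the access key before saving it is to list buckets through the S3-compatible endpoint of the chosen region. A sketch assuming the AWS CLI is installed; replace the region in the endpoint URL with your Datacenter ID:

```bash
# Verify the Linode Object Storage credentials against the region's S3 endpoint
export AWS_ACCESS_KEY_ID="<accessKeyId>"
export AWS_SECRET_ACCESS_KEY="<secretAccessKey>"
aws s3 ls --endpoint-url https://us-east-1.linodeobjects.com
```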

### Buckets

The preferred bucket names to be used for each app:

| Bucket | Description                                                   |
| ------ | ------------------------------------------------------------- |
| loki   | Bucket used to store Loki logs                                |
| cnpg   | Bucket used to store backups of databases                     |
| velero | Bucket used to store backups of Persistent Volumes by Velero |
| harbor | Bucket used to store images of private registries in Harbor  |
| tempo  | Bucket used to store Tempo traces                             |
2 changes: 1 addition & 1 deletion docs/for-ops/how-to/core-only.md
title: Use Core only
sidebar_label: Use Core only
---

By default, Gitea, Tekton, Argo CD, the platform API and the Console are installed. The Console is the self-service UI and uses the platform API to generate validated configuration code. This configuration code is then committed to Gitea (in the `values` repository), which will trigger the pre-configured Tekton pipeline to apply the changes.

In some cases you might not want to use the Console and the platform API, but instead install and manage configuration of the platform using a custom pipeline. Possible use-cases for this scenario are:

2 changes: 1 addition & 1 deletion docs/for-ops/sre/overview.md

The APL values can be overridden by custom configuration values using `_rawValues`. Custom configuration values can be all values supported by the upstream Helm chart of the integrated open source application in APL Core.

SREs can use the APL Console to change configuration settings (like security policies), but can also change the APL values directly using the APL values schema and by using overrides. In all cases, the configuration is stored in code (the `values` repository).

The following code shows the configuration values of the ingress-nginx chart.

127 changes: 120 additions & 7 deletions docs/get-started/installation/akamai-cloud.md
---
slug: akamai-cloud
title: Akamai Connected Cloud
sidebar_label: Akamai Connected Cloud
---

:::info
Coming soon!
:::

## Create an LKE cluster

1. Log into your Cloud Manager account.

2. Select Kubernetes from the left navigation menu and then click Create Cluster.

3. The Create a Kubernetes Cluster page appears. At the top of the page, you are required to select the following options:


- In the Cluster Label field, provide a name for your cluster. The name must be unique between all of the clusters on your account. This name is how you identify your cluster in Cloud Manager’s Dashboard.


- From the Region dropdown menu, select the Region where you would like your cluster to reside.


- From the Version dropdown menu, select a Kubernetes version to deploy to your cluster.


4. In the Application Platform for LKE section, select “Yes, enable Application Platform for LKE”.

:::note
The Application Platform for LKE requires HA control plane to be enabled. When Application Platform for LKE is enabled, HA control plane will automatically be enabled.
:::

5. In the Add Node Pools section, select the hardware resources for the Linode worker node(s) that make up your LKE cluster. To the right of each plan, use the plus + and minus - to add or remove Linodes from a node pool one at a time.

:::note
The Application Platform for LKE requires a node pool with at least 3 worker nodes and a total of at least 16 GB of memory and 12 CPUs. Linode plans that do not provide the minimum required resources cannot be selected.
:::

6. Select Add to include the node pool in your configuration. If you decide that you need more hardware resources after you deploy your cluster, you can always [edit your Node Pool](https://techdocs.akamai.com/cloud-computing/docs/manage-nodes-and-node-pools).

7. Once a pool has been added to your configuration, it is listed in the Cluster Summary on the right-hand side of Cloud Manager detailing your cluster's hardware resources and monthly cost. Additional pools can be added before finalizing the cluster creation process by repeating the previous step for each additional pool.

8. When you are satisfied with the configuration of your cluster, click the Create Cluster button on the right-hand side of the screen. Your cluster's detail page appears, and your Node Pools are listed on this page. First the LKE cluster will be created, and once it is ready the Application Platform for LKE will be installed. The installation of the Application Platform for LKE takes around 10 to 15 minutes. The progress of the installation is checked every 60 seconds: while the installation is still in progress, the URL of the Portal Endpoint is not displayed and the message “Installation in progress” appears instead. When the installation is finished, the URL of the Portal Endpoint will appear in the Application Platform for LKE section.


9. When the installation of both the LKE cluster and the Application Platform is complete, click on the provided URL of the Portal Endpoint. You will then see the sign-in page.

10. Continue with the next steps to get the initial credentials needed to sign in.

## Access and download your Kubeconfig

1. To access your cluster's Kubeconfig, log in to your Cloud Manager account and navigate to the Kubernetes section.

2. From the Kubernetes listing page, click on your cluster's more options ellipsis and select Download Kubeconfig. The file is saved to your computer's Downloads folder.

3. Open a terminal shell and save your Kubeconfig file's path to the $KUBECONFIG environment variable. In the example command, the Kubeconfig file is located in the Downloads folder; adjust the path to its location on your computer:

```bash
export KUBECONFIG=~/Downloads/kubeconfig.yaml
```
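
As a quick check that the Kubeconfig works, you can list the cluster's worker nodes:

```bash
# Should list the nodes of the node pool(s) created above
kubectl get nodes -o wide
```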

## Obtain the initial access credentials and sign in

1. Run the following command to obtain the username:

```bash
kubectl get secret platform-admin-initial-credentials -n keycloak --template={{.data.username}} | base64 -d
```

2. Run the following command to obtain the initial password:

```bash
kubectl get secret platform-admin-initial-credentials -n keycloak --template={{.data.password}} | base64 -d
```

3. Copy the username and password to your clipboard.

4. Sign in to the Console with the provided username and initial password.

5. Change the initial password.

## Provision Object Storage for the Application Platform

When signed in to the Console (the web UI of the Application Platform), the first thing you’ll need to do is configure Object Storage. A wizard will be displayed asking whether the Application Platform should provision all the required Buckets and the access key for you. This is not required, but it is strongly recommended, as it prevents `out of disk space` errors when using Storage Volumes for integrated applications. Using Object Storage also makes it possible to create backups of all databases used by the platform.

1. When asked to create all the required Buckets and access key, click Yes. If you don’t want the platform to create all the required buckets, then click Skip. Note that in this case some features like creating backups of databases will not be available. You can start the Wizard at any time in the Console (Platform View: Maintenance, Show Object Storage Wizard).

2. Follow the instructions to [create a Personal Access Token](https://techdocs.akamai.com/linode-api/reference/get-started#personal-access-tokens) and make sure to select **Read/Write** for the Object Storage category and **Read** for the Kubernetes category. Copy the Access Token.

3. Now paste the Access Token into the wizard and click Submit.

All the required Buckets and the Access Key will now be created in your account, and the platform will be configured to use Object Storage to store persistent data and backups. The provided Personal Access Token will not be stored. The created buckets will have the `<cluster-id>` prefix.
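
If you want to double-check what was created, one option is to list the buckets with the Linode CLI; this assumes the `linode-cli` with the Object Storage plugin is installed and configured, which is not required by the wizard itself:

```bash
# List all Object Storage buckets in the account; the wizard-created ones
# should carry the <cluster-id> prefix mentioned above
linode-cli obj ls
```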

## Known issues

If the URL of the Portal Endpoint does not appear in the Application Platform for LKE section after 30 minutes, one of the following issues may be the cause:

### Installation gets stuck because of a quota exceeded exception

In addition to the resources required for LKE, the Application Platform also uses a NodeBalancer and a minimum of 11 Storage Volumes. This might result in a quota exceeded exception. Linode does not currently show quota limits in your account details.

The following issue might be related to a quota exceeded exception:

Pods that require a Storage Volume get stuck in a pending state with the following message:

`pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling.`
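
To confirm that this is what is happening, list the pods and claims that are still pending:

```bash
# PersistentVolumeClaims that have not been bound to a Storage Volume yet
kubectl get pvc --all-namespaces | grep -i pending

# Pods that are stuck in the Pending phase
kubectl get pods --all-namespaces --field-selector=status.phase=Pending
```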

Resolution:

- Remove any Storage Volumes that are Unattached.

- If you would like to know your account's limits or want to increase the number of entities you can create, the best way is to get that information through a support ticket.

### The Let’s Encrypt secret request was not successful

For each cluster with the Application Platform for LKE enabled, a Let’s Encrypt certificate will be requested. If the certificate is not ready within 30 minutes, the installation of the Application Platform will fail. Run the following command to see if the certificate secret was created:

```bash
kubectl get secret -n istio-system
```

There should be a secret called: `apl-<cluster-id>-wildcard-cert`

If this secret is not present, then the request failed.
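
If the secret is missing, inspecting the certificate request can show why it failed. A sketch, assuming the certificate is issued through cert-manager (this page does not state which issuer is used):

```bash
# Show certificate resources and recent events in the istio-system namespace
kubectl get certificates -n istio-system
kubectl get events -n istio-system --sort-by=.lastTimestamp | tail -n 20
```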

Resolution:

- Delete the LKE cluster and create a new cluster with Application Platform for LKE enabled.
