diff --git a/docs/apl/introduction.md b/docs/apl/introduction.md
index afa2de95f..6478d95f3 100644
--- a/docs/apl/introduction.md
+++ b/docs/apl/introduction.md
@@ -6,8 +6,8 @@ sidebar_label: Introduction
## Application Platform for LKE
-Application Platform for LKE (you will see us use the abbreviation `APL`) is a platform that combines developer and operations-centric tools, automation and self-service to streamline the application lifecycle when using Kubernetes. From development to delivery to management of containerized application workloads.
+Application Platform for LKE is a platform that combines developer and operations-centric tools, automation and self-service to streamline the application lifecycle when using Kubernetes, from development to delivery to management of containerized application workloads.
-Application Platform for LKE connects many of the technologies found in the Cloud Native Computing Foundation (CNCF) landscape in a way to provide direct value. No more re-inventing the wheel when building and maintaining your own Kubernetes based platform or bespoke stack.
+The platform connects many of the technologies found in the Cloud Native Computing Foundation (CNCF) landscape in a way that provides direct value. No more re-inventing the wheel when building and maintaining your own Kubernetes-based platform or bespoke stack.
Application Platform for LKE is optimized to run on Linode Kubernetes Engine (LKE), but can also (manually) be installed on any other [conformant Kubernetes cluster](https://www.cncf.io/training/certification/software-conformance/).
diff --git a/docs/apps/alertmanager.md b/docs/apps/alertmanager.md
index 6e726b374..779ae8bc7 100644
--- a/docs/apps/alertmanager.md
+++ b/docs/apps/alertmanager.md
@@ -12,7 +12,7 @@ Alertmanager is configured to use the global values found under settings' [alert
A team may decide to override some or all of them, in order to have alerts sent to their own endpoints. Self-service rights to alerting must be enabled for the team (enabled by default for all teams). Each Team can enable a dedicated alertmanager instance.
-APL supports the following receivers:
+The following receivers are supported:
- `Slack`
- `Microsoft Teams`
diff --git a/docs/apps/argocd.md b/docs/apps/argocd.md
index 59c16cec9..aefedd45a 100644
--- a/docs/apps/argocd.md
+++ b/docs/apps/argocd.md
@@ -6,17 +6,17 @@ sidebar_label: Argo CD
## About
-Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. Argo CD is configured by APL to use the SSO provided by keycloak, and maps APL groups to Argo CD roles. The `otomi-admin` role is made super admin within Argo CD. The team-admin role has access to Argo CD and is admin of all team projects. Members of team roles are only allowed to administer their own projects. All Teams will automatically get access to a Git repo, and Argo CD is configured to listen to this repo. All a team has to do is to fill their repo with intended state, commit, and automation takes care of the rest.
+Argo CD is a declarative, GitOps continuous delivery tool for Kubernetes. Argo CD is configured to use the SSO provided by Keycloak, and maps groups to Argo CD roles. The `platform-admins` role is made super admin within Argo CD. The `team-admins` role has access to Argo CD and is admin of all team projects. Members of team roles are only allowed to administer their own projects. All Teams will automatically get access to a Git repo, and Argo CD is configured to listen to this repo. All a team has to do is fill their repo with the intended state, commit, and automation takes care of the rest.
-Argo CD is configured to use the SSO provided by keycloak, and maps APL groups to Argo CD roles:
+Argo CD is configured to use the SSO provided by Keycloak, and maps groups to Argo CD roles:
-- Group `otomi-admin` is made super admin within Argo CD
+- Group `platform-admins` is made super admin within Argo CD
-- Group team-admin has access to, and is admin of all team projects
+- Group `team-admins` has access to, and is admin of, all Team projects
-- Team members are only allowed access to, and administer their own projects
+- Team members are only allowed to access and administer their own projects
Teams will automatically be given a git repository in Gitea named `team-$teamId-argocd`, and Argo CD is automatically configured to access the repository and sync. All that is left to do is for a team-admin (or a team member with self-service rights) to fill the repository with the intended state (manifests) and commit.
diff --git a/docs/apps/certmanager.md b/docs/apps/certmanager.md
index 1656e38cc..10bd3dec0 100644
--- a/docs/apps/certmanager.md
+++ b/docs/apps/certmanager.md
@@ -6,12 +6,12 @@ sidebar_label: Cert-Manager
## About
-Cert-Manager is used by APL to automatically create and rotate wildcard TLS certificates for service endpoints. You may bring your own CA, or let APL create one for you. If you bring your own trusted wildcard certificate, then cert-manager will not manage this certificate.
+Cert-Manager is used to automatically create and rotate wildcard TLS certificates for service endpoints. You may bring your own CA, or let one be created for you. If you bring your own trusted wildcard certificate, then cert-manager will not manage this certificate.
:::info
The wildcard certificate must be valid for the following domain `*.`, where the value of `` comes from the cluster.yaml file.
:::
:::info
-Setting Cert-Manager to use Letsencrypt requires DNS availability of the requesting domains, and forces Otomi to install [ExternalDNS](external-dns.md). Because a lot of DNS settings are used by other APL contexts, all DNS configuration can be found [here](../for-ops/console/settings/dns.md).
+Setting Cert-Manager to use Let's Encrypt requires DNS availability of the requesting domains, and forces the platform to install [ExternalDNS](external-dns.md). Because a lot of DNS settings are used by other contexts, all DNS configuration can be found [here](../for-ops/console/settings/dns.md).
:::
|
diff --git a/docs/apps/cloudnativepg.md b/docs/apps/cloudnativepg.md
index d1b483ea1..f6fede8a4 100644
--- a/docs/apps/cloudnativepg.md
+++ b/docs/apps/cloudnativepg.md
@@ -6,4 +6,4 @@ sidebar_label: Cloudnative Postgresql
## About
-CloudNativePG is used by APL to provide PostgreSQL database for APL applications like Harbor and Keycloak. Configure a storage provider to store backups in (external) object storage. Backups can be enabled in the settings.
\ No newline at end of file
+CloudNativePG is used to provide a PostgreSQL database for integrated applications like Harbor and Keycloak. Configure a storage provider to store backups in (external) object storage. Backups can be enabled in the settings.
\ No newline at end of file
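For illustration, a CloudNativePG `Cluster` with backups shipped to S3-compatible object storage might look like the sketch below. It uses upstream CloudNativePG fields; the bucket path, endpoint and the `backup-creds` Secret are hypothetical examples, not values shipped with the platform:

```yaml
# Sketch of a CloudNativePG Cluster with backups to (external) object storage.
# The destinationPath, endpointURL and backup-creds Secret are hypothetical.
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: example-db
spec:
  instances: 3
  storage:
    size: 10Gi
  backup:
    retentionPolicy: 30d
    barmanObjectStore:
      destinationPath: s3://example-backups/example-db
      endpointURL: https://s3.example.com
      s3Credentials:
        accessKeyId:
          name: backup-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: backup-creds
          key: SECRET_ACCESS_KEY
```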
diff --git a/docs/apps/external-dns.md b/docs/apps/external-dns.md
index 638dca2ac..147bdaba0 100644
--- a/docs/apps/external-dns.md
+++ b/docs/apps/external-dns.md
@@ -6,12 +6,12 @@ sidebar_label: ExternalDNS
## About
-ExternalDNS is required to make public service domains accessible by registering them with APL's front loadbalancer CNAME or IP address. When it is not enabled (default) APL will instead rely on [nip.io](https://nip.io) to create host names for all services.
+ExternalDNS is required to make public service domains accessible by registering them with the load balancer IP address. When ExternalDNS is not enabled, [nip.io](https://nip.io) will be used.
The use of ExternalDNS is a prerequisite for using the following features:
- Harbor private registries for teams.
-- The Builds self-service feature in APL Console (relies on Harbor).
+- The Builds self-service feature in the Console (relies on Harbor).
-- The Projects self-service feature in APL Console (relies on Harbor).
+- The Projects self-service feature in the Console (relies on Harbor).
diff --git a/docs/apps/falco.md b/docs/apps/falco.md
index 39b4dbb15..7fe5cc37b 100644
--- a/docs/apps/falco.md
+++ b/docs/apps/falco.md
@@ -18,4 +18,4 @@ Before activating Falco, please first check which [Driver](https://falco.org/doc
If you know which driver should be used, activate Falco, go to the `Values`, add the `Driver` and submit changes. Now `Deploy Changes`.
-When Falco is installed, APL will add a set of rules to `white-list` all known behaviour. These rules are added using the Raw Values.
\ No newline at end of file
+When Falco is installed, a set of rules to `white-list` all known behaviour is added. These rules are added using the Raw Values.
\ No newline at end of file
diff --git a/docs/apps/gitea.md b/docs/apps/gitea.md
index 2f1fe7ec7..4804ca186 100644
--- a/docs/apps/gitea.md
+++ b/docs/apps/gitea.md
@@ -6,4 +6,4 @@ sidebar_label: Gitea
## About
-Gitea is a community managed lightweight code hosting solution written in Go. Because APL uses Tekton to deploy changes to the values repo, it needs a git hosting solution. When no source control is configured, APL will deploy Gitea for Tekton to target as a git repo. Gitea may be used for other purposes, and is especially useful in combination with Tekton as a CI/CD solution. Just like APL uses it.
+Gitea is a community-managed lightweight code hosting solution written in Go. Because Tekton is used to deploy changes to the values repo, a Git hosting solution is needed. Gitea is used to host all of the internal Git repositories used by the platform and can also be used for other purposes.
diff --git a/docs/apps/grafana.md b/docs/apps/grafana.md
index 8a574bb21..f969680bc 100644
--- a/docs/apps/grafana.md
+++ b/docs/apps/grafana.md
@@ -6,4 +6,4 @@ sidebar_label: Grafana
## About
-APL uses Grafana to visualize [Prometheus](prometheus.md) metrics and [Loki](loki.md) logs. Team members are automatically given `Editor` role, while admins are also given `Admin` role. It is possible to make configuration changes directly in Grafana, but only to non-conflicting settings. Data sources are preconfigured and must not be edited as changes will be lost when Grafana is redeployed.
\ No newline at end of file
+Grafana is used to visualize [Prometheus](prometheus.md) metrics and [Loki](loki.md) logs. Team members are automatically given the `Editor` role, while admins are also given the `Admin` role. It is possible to make configuration changes directly in Grafana, but only to non-conflicting settings. Data sources are preconfigured and must not be edited, as changes will be lost when Grafana is redeployed.
\ No newline at end of file
diff --git a/docs/apps/harbor.md b/docs/apps/harbor.md
index 712c396e5..17728d12b 100644
--- a/docs/apps/harbor.md
+++ b/docs/apps/harbor.md
@@ -8,7 +8,7 @@ sidebar_label: Harbor
Harbor is an open-source registry that secures artifacts with policies and role-based access control, ensures images are scanned and free from vulnerabilities, and signs images as trusted. Harbor delivers compliance, performance, and interoperability to help you consistently and securely manage artifacts across cloud-native compute platforms like Kubernetes. (source: https://goharbor.io/)
-APL automates the following Harbor maintanace tasks:
+The following Harbor maintenance tasks are automated:
- Creating a project in Harbor for each team.
diff --git a/docs/apps/ingress-nginx.md b/docs/apps/ingress-nginx.md
index 5807c6394..52d101136 100644
--- a/docs/apps/ingress-nginx.md
+++ b/docs/apps/ingress-nginx.md
@@ -6,7 +6,7 @@ sidebar_label: Ingress-nginx
## About
-Ingress NGINX is the default ingress controller in APL and part of the core setup (this means it is not possible use another controller within APL).
+Ingress NGINX is the default ingress controller and part of the core setup.
### Using the OWASP rule set
diff --git a/docs/apps/istio.md b/docs/apps/istio.md
index 44e3d641a..605cde5e4 100644
--- a/docs/apps/istio.md
+++ b/docs/apps/istio.md
@@ -6,7 +6,7 @@ sidebar_label: Istio
## About
-Istio is installed by APL to deliver the following capabilities:
+Istio is installed to deliver the following capabilities:
- mTLS enforcement for all traffic that is deemed compromisable.
diff --git a/docs/apps/jaeger.md b/docs/apps/jaeger.md
index 4588deb59..b6ba4c053 100644
--- a/docs/apps/jaeger.md
+++ b/docs/apps/jaeger.md
@@ -6,4 +6,4 @@ sidebar_label: Jaeger
## About
-Jaeger can be activated by APL to gain tracing insights on its network traffic. It runs in anonymous mode and each authenticated user is given the same authorization, allowing them to see everything. In the future this may be limited according to scope such as role and teams.
\ No newline at end of file
+Jaeger can be activated to gain tracing insights into the platform's network traffic. It runs in anonymous mode and each authenticated user is given the same authorization, allowing them to see everything. In the future this may be limited according to scope, such as role and teams.
\ No newline at end of file
diff --git a/docs/apps/keycloak.md b/docs/apps/keycloak.md
index 9f601cd88..759c05bff 100644
--- a/docs/apps/keycloak.md
+++ b/docs/apps/keycloak.md
@@ -6,15 +6,15 @@ sidebar_label: Keycloak
## About
-The SSO login page for APL is served by Keycloak. It is used as an identity broker or provider for all APL integrated applications. Keycloak is configured with mappers that normalize incoming identities from an IDP to have predictable claims format to be used by APL applications.
+The SSO login page for all platform services (like the Console) is served by Keycloak. It is used as an identity broker or provider for all integrated applications. Keycloak is configured with mappers that normalize incoming identities from an IdP to have a predictable claims format to be used by integrated applications.
Keycloak is automatically configured with 3 roles:
-- `otomi-admin`: super admin role for all platform configuration and core applications.
+- `platform-admins`: super admin role for all platform configuration and core applications.
-- `team-admin`: team admin role to manage teams and users.
+- `team-admins`: team admin role to manage teams and users.
-- `team`: team role for team members.
+- `team-members`: team role for team members.
Group (team) membership is reflected in the user's 'groups' claim. When this authorization configuration is useful for their own applications, teams can directly use the groups and roles claims provided by Keycloak. There is no need for client or token validation, as that has already been done by the platform. They can do so by turning on the "Authenticate with Single Sign On" checkbox. This then limits application access to members of the team only.
diff --git a/docs/apps/kiali.md b/docs/apps/kiali.md
index 3030f90c5..8586aba8f 100644
--- a/docs/apps/kiali.md
+++ b/docs/apps/kiali.md
@@ -6,4 +6,4 @@ sidebar_label: Kiali
## About
-Kiali can be activated in APL to gain observability insights on its network traffic. It runs in anonymous mode and each authenticated user is given the same authorization, allowing them to see everything.
\ No newline at end of file
+Kiali can be activated to gain insights into Istio. It runs in anonymous mode and each authenticated administrator is given the same authorization, allowing them to see everything.
\ No newline at end of file
diff --git a/docs/apps/knative.md b/docs/apps/knative.md
index 395797866..78b2c73c1 100644
--- a/docs/apps/knative.md
+++ b/docs/apps/knative.md
@@ -6,4 +6,4 @@ sidebar_label: Knative
## About
-Knative can be activated in APL to deliver Container-as-a-Service (CaaS) functionality with scale-to-zero possibility. It can be compared to Functions-as-a-service (FaaS) but is container oriented, and takes only one manifest to configure an autoscaling service based on a container image of choice. APL uses Istio Virtual Services under the hood to route traffic coming in for a public domain to its backing Knative Service, allowing to set a custom domain.
\ No newline at end of file
+Knative can be activated to deliver Container-as-a-Service (CaaS) functionality with scale-to-zero possibility. It can be compared to Functions-as-a-Service (FaaS) but is container-oriented, and takes only one manifest to configure an autoscaling service based on a container image of choice. Istio Virtual Services are used to route traffic for a public domain to its backing Knative Service.
\ No newline at end of file
diff --git a/docs/apps/loki.md b/docs/apps/loki.md
index d3f843cb2..1cee22b00 100644
--- a/docs/apps/loki.md
+++ b/docs/apps/loki.md
@@ -6,16 +6,4 @@ sidebar_label: Loki
## About
-Loki aggregates all the container logs from the platform and stores them in a storage endpoint of choice (defaults to PVC). By default APL will split logs from team namespaces and make them available only to team members. APL splits logs per team, installs a dedicated Grafana instance per team and configures authentication for Grafana to allow access for team members only.
-## Known issues
-### Time Range does not show all data
-Unfortunately the Grafana team has not yet solved their long running problems with their LogQL interface. Instead of providing paginated queries to Loki, it is needed to provide a "line limit" by the user manually.
-In a data driven application that has pagination, when a user selects a time window for a data query, the user will not have to provide additional information to perform that query. The UI application takes responsibility for instrumenting the query towards its data backend. It should thus load & render the results either through pagination or by scrolling the time range into view.
-**Solution:**
-When you don't see enough data, try increasing the line limit. The maximum is configurable in the Loki values.
+Loki aggregates all the container logs from the platform and stores them in a storage endpoint of choice (defaults to PVC). By default, logs from Team namespaces are split and available to team members only.
\ No newline at end of file
diff --git a/docs/apps/sealedsecrets.md b/docs/apps/sealedsecrets.md
index 27df5c953..3ba83f5f2 100644
--- a/docs/apps/sealedsecrets.md
+++ b/docs/apps/sealedsecrets.md
@@ -14,7 +14,7 @@ You can use your certificates for the disaster recovery purpose. Please make sur
While the controller generates its own certificates upon deployment, you also have the option to bring your own certificates. This allows the controller to consume certificates from a secret labeled with `sealedsecrets.bitnami.com/sealed-secrets-key=active`. The Secret should reside in the `sealed-secrets` namespace, which must be the same as the controller's namespace. You can have multiple secrets with this label.
-To configure BYO certificates, add the following to the `values.yaml` when installing APL:
+To configure BYO certificates, add the following to the `values.yaml`:
```yaml
apps:
diff --git a/docs/apps/tekton.md b/docs/apps/tekton.md
index 01ccdb1b2..cbf1a637a 100644
--- a/docs/apps/tekton.md
+++ b/docs/apps/tekton.md
@@ -6,13 +6,13 @@ sidebar_label: Tekton
## About
-Tekton is used in APL for the Builds self-service. When a Build is created, APL generates the Tekton Pipeline and Pipelinerun resources. There are 2 pipeline types:
+Tekton is used for the Builds self-service feature. When a Build is created, the Tekton Pipeline and PipelineRun resources are created automatically. There are 2 pipeline types:
- `Docker` for building images based on a Dockerfile
- `Buildpacks` for building images using buildpacks
-When Tekton is activated, APL will add 4 Tekton tasks to the team's namespace:
+When Tekton is activated, 4 Tekton tasks will be added to the Team's namespace:
1. [`buildpacks`](https://github.com/tektoncd/catalog/tree/main/task/buildpacks/0.6)
@@ -25,13 +25,13 @@ When Tekton is activated, APL will add 4 Tekton tasks to the team's namespace:
and use them in the Build pipelines.
-When APL generates the manifest resources for the pipeline and the pipelinerun, the pipelinerun will automatically run the pipeline once. Use the following command to check if the status of the pipelinerun:
+When the manifests for the pipeline and the PipelineRun are applied, the PipelineRun will automatically run the pipeline once. Use the following command to check the status of the PipelineRun:
```
tkn pipelineruns logs -n team-
```
-If the build is changed in APL, the pipelinerun will not be re-started. Use the following command to start the pipeline after a change:
+If the Build is changed, the PipelineRun will not be restarted. Use the following command to start the pipeline after a change:
```
tkn pipeline start --use-pipelinerun -n team-
```
diff --git a/docs/for-devs/console/builds.md b/docs/for-devs/console/builds.md
index cfd6b657a..5c3868b60 100644
--- a/docs/for-devs/console/builds.md
+++ b/docs/for-devs/console/builds.md
@@ -6,7 +6,7 @@ sidebar_label: Builds
-A Build in APL is a self-service feature for building OCI compliant images based on application source code and store the image in a private Team registry in Harbor.
+A Build is a self-service feature for building OCI-compliant images from application source code and storing the image in a private Team registry in Harbor.
:::info
Ask your platform administrator to activate the Harbor App to use this feature.
:::
@@ -75,7 +75,7 @@ To see more status details of the build, click on the `PipelineRun` link of
### Configure a webhook for the Git repo in Gitea
-1. In APL Console, click on `Apps` in the left menu and then open `Gitea`
+1. In the Console, click on `Apps` in the left menu and then open `Gitea`
2. In the top menu of Gitea, click on `Explore` and then on the `green` repo
@@ -89,7 +89,7 @@ To see more status details of the build, click on the `PipelineRun` link of
### Expose the trigger listener publicly
-When using an external (private) Git repository, the trigger event listener that is created by APL can also be exposed publicly. To expose the event listener publicly:
+When using an external (private) Git repository, the trigger event listener that is created can also be exposed publicly. To expose the event listener publicly:
1. Go to Services
diff --git a/docs/for-devs/console/catalog.md b/docs/for-devs/console/catalog.md
index d1b1bc4c9..51a3f8d6d 100644
--- a/docs/for-devs/console/catalog.md
+++ b/docs/for-devs/console/catalog.md
@@ -4,41 +4,41 @@ title: Catalog
sidebar_label: Catalog
---
-The Catalog is a library of curated Helm charts to create Kubernetes resources. By default the Catalog contains a set of Helm charts provided by APL to get started quickly, but they can also be modified depending on your requirements or be removed from the Catalog.
+The Catalog is a library of curated Helm charts to create Kubernetes resources. By default the Catalog contains a set of Helm charts provided to help you get started quickly, but they can also be modified depending on your requirements or be removed from the Catalog.
The contents of the Catalog and the RBAC configuration (which Team can use which Helm chart) are managed by the platform administrator. Contact the platform administrator if you would like to add your own charts to use within your Team.
-## About APL Catalog quick starts
+## About the Catalog quick starts
The Catalog contains a set of Helm charts that can be used as quick starts. The following quick starts are available:
### Kubernetes Deployment
-The `apl-quickstart-k8s-deployment` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` can be created.
+The `quickstart-k8s-deployment` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `ConfigMap` can be created.
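For illustration, values for the `quickstart-k8s-deployment` chart might look like the sketch below. The key names are hypothetical; consult the chart's values schema in the Catalog for the actual keys:

```yaml
# Hypothetical values for the quickstart-k8s-deployment chart.
image:
  repository: harbor.example.com/team-demo/my-app   # example registry path
  tag: "1.0.0"
containerPorts:
  - name: http
    containerPort: 8080
autoscaling:            # optional HorizontalPodAutoscaler
  enabled: true
  minReplicas: 2
  maxReplicas: 5
serviceMonitor:         # optional Prometheus ServiceMonitor
  enabled: false
```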
### Kubernetes Deployment with Open Telemetry Instrumentation
-The `apl-quickstart-k8s-deployment-otel` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service`, a `ServiceAccount` and an `Instrumentation` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` can be created.
+The `quickstart-k8s-deployment-otel` Helm chart can be used to create a Kubernetes `Deployment` (to deploy a single image), a `Service`, a `ServiceAccount` and an `Instrumentation` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `ConfigMap` can be created.
### Kubernetes Canary Deployments
-The `apl-quickstart-k8s-deployments-canary` Helm chart can be used to create 2 Kubernetes `Deployments` (to deploy 2 versions of an image), a `Service` and a `ServiceAccount` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `Configmap` (for each version) can be created.
+The `quickstart-k8s-deployments-canary` Helm chart can be used to create 2 Kubernetes `Deployments` (to deploy 2 versions of an image), a `Service` and a `ServiceAccount` resource. Optionally a `HorizontalPodAutoscaler`, a Prometheus `ServiceMonitor` and a `ConfigMap` (for each version) can be created.
### Knative-service
-The `apl-quickstart-knative-service` Helm chart can be used to create a Knative `Service` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a Prometheus `ServiceMonitor` can be created.
+The `quickstart-knative-service` Helm chart can be used to create a Knative `Service` (to deploy a single image), a `Service` and a `ServiceAccount`. Optionally a Prometheus `ServiceMonitor` can be created.
### PostgreSQL cluster
-The `apl-quickstart-postgresql` Helm chart can be used to create a cloudnativepg PostgreSQL `Cluster`. Optionally a Prometheus `PodMonitor` and a `Configmap` (for adding a postgresql dashboard to Grafana) can be created.
+The `quickstart-postgresql` Helm chart can be used to create a CloudNativePG PostgreSQL `Cluster`. Optionally a Prometheus `PodMonitor` and a `ConfigMap` (for adding a PostgreSQL dashboard to Grafana) can be created.
### Redis master-replica cluster
-The `apl-quickstart-redis` Helm chart can be used to create a Redis master-replica cluster.
+The `quickstart-redis` Helm chart can be used to create a Redis master-replica cluster.
### RabbitMQ Cluster and/or Queues
-The `apl-quickstart-rabbitmq` Helm chart can be used to create a `RabbitmqCluster`, `queues` and `Policy`s
+The `quickstart-rabbitmq` Helm chart can be used to create a `RabbitmqCluster`, `Queue`s and `Policy`s.
## Using the Catalog
diff --git a/docs/for-devs/console/dashboard.md b/docs/for-devs/console/dashboard.md
index 31bf9198b..a90f0d289 100644
--- a/docs/for-devs/console/dashboard.md
+++ b/docs/for-devs/console/dashboard.md
@@ -31,7 +31,7 @@ The dashboard has 5 elements
### Inventory
-The inventory shows the APL resources within the team. Click on an inventory item to go directly to the full list.
+The inventory shows the resources within the Team. Click on an inventory item to go directly to the full list.
### Resource Status
diff --git a/docs/for-devs/console/deploy-changes.md b/docs/for-devs/console/deploy-changes.md
index 1c4b5f0b2..391d878cc 100644
--- a/docs/for-devs/console/deploy-changes.md
+++ b/docs/for-devs/console/deploy-changes.md
@@ -4,4 +4,4 @@ title: Deploy changes
sidebar_label: Deploy Changes
---
-When a form (build, workload, service, backup, secret) is submitted, APL will prepare a commit in the APL Values (otomi-values) Git repository. When the commit is prepared, the `Deploy Changes` button in the top of the left menu will become active. To commit your changes, click on the Deploy Changes button.
+When a self-service form (Build, Workload, Service, Backup, Secret) is submitted, a commit in the `otomi-values` Git repository will be prepared. When the commit is prepared, the `Deploy Changes` button at the top of the left menu will become active. To commit your changes, click on the `Deploy Changes` button.
diff --git a/docs/for-devs/console/netpols.md b/docs/for-devs/console/netpols.md
index bda2d3eec..c76ec5e3b 100644
--- a/docs/for-devs/console/netpols.md
+++ b/docs/for-devs/console/netpols.md
@@ -4,7 +4,7 @@ title: Team Network Policies
sidebar_label: Network Policies
---
-A Network Policy in APL is a self-service feature for creating Kubernetes Network Policies (Ingress) and Istio Service Entries (Egress).
+A Network Policy is a self-service feature for creating Kubernetes Network Policies (Ingress) and Istio Service Entries (Egress).
When the Network Policies `Ingress control` option is enabled for the team, all traffic to the Pods of the Team (from other Pods within the Team and from Pods in other Teams) will be blocked by default. To allow other Pods to access your Pod, you will need to create a Network Policy of type `ingress`.
diff --git a/docs/for-devs/console/overview.md b/docs/for-devs/console/overview.md
index a5991ce44..6e9a16bd6 100644
--- a/docs/for-devs/console/overview.md
+++ b/docs/for-devs/console/overview.md
@@ -6,16 +6,16 @@ sidebar_label: Overview
-## APL Console
+## Console
-APL Console is the web UI of APL and offers access to all integrated apps and self-service features. APL Console has a topbar showing a Team selector. The Team selector allows you to switch between different Teams (if you are a member of different Teams).
+The Console is the web UI of the platform and offers access to all integrated apps and self-service features. The Console has a topbar showing a Team selector. The Team selector allows you to switch between different Teams (if you are a member of multiple Teams).
-The Team view in APL Console gives access to 3 menu sections:
+The Team view in the Console gives access to 3 menu sections:
Deploy section:
- [Deploy Changes](deploy-changes): Commit changes to the configuration repository
-- Revert Changes: Revert your changes made in APL Console
+- Revert Changes: Revert your changes made in the Console
Self-service section:
@@ -36,5 +36,5 @@ Download links section:
- A "Download KUBECFG" link to download a KUBECONFIG file that gives access to the namespace of the team selected. Admins can download one with `cluster-admin` permissions (giving access to all namespaces) by setting the team selector to '-'. You can use it like `export KUBECONFIG=$file_location` or by merging it with another KUBECONFIG file like `.kube/config`. Please visit the official Kubernetes [documentation about managing kube contexts](https://kubernetes.io/docs/concepts/configuration/organize-cluster-access-kubeconfig/).
- When Harbor is enabled, a link to download the Dockercfg file.
-- When APL uses an automatic generated CA or Let's Encrypt staging certificates, a "Download CA" link is provided.
+- When an automatically generated CA or Let's Encrypt staging certificates are used, a "Download CA" link is provided.
diff --git a/docs/for-devs/console/projects.md b/docs/for-devs/console/projects.md
index 7268c56f7..91d711dc8 100644
--- a/docs/for-devs/console/projects.md
+++ b/docs/for-devs/console/projects.md
@@ -4,14 +4,14 @@ title: Team Projects
sidebar_label: Projects
---
-A Project in APL is a collection of a Build, a Workload and a Service in ONE form.
+A Project is a collection of a Build, a Workload and a Service in ONE form.
## Create a Project
1. In the left menu click on `Projects` and then on `Create project`.
2. Provide a name for the project.
-Note: The name of the project will be used for all created APL resources (build, workload and service).
+Note: The name of the project will be used for all created resources (build, workload and service).
3. Select `Create build from source` or `Use an existing image`
4. If `Create build from source` is selected: follow the [instructions](builds.md) for creating a Build
diff --git a/docs/for-devs/console/sealed-secrets.md b/docs/for-devs/console/sealed-secrets.md
index 36bc4a02a..55c478ec1 100644
--- a/docs/for-devs/console/sealed-secrets.md
+++ b/docs/for-devs/console/sealed-secrets.md
@@ -4,9 +4,9 @@ title: Team Secrets
sidebar_label: Sealed Secrets
---
-Sealed Secrets are encrypted Kubernetes Secrets. The encrypted secrets are stored in the APL Values Git repository. When a Sealed Secrets secret is created in APL Console, the Kubernetes Secret will appear in the Team's namespace and can be used as you would use any secret that you would have created directly.
+Sealed Secrets are encrypted Kubernetes Secrets. The encrypted secrets are stored in the Values Git repository. When a Sealed Secret is created in the Console, the Kubernetes Secret will appear in the Team's namespace and can be used as you would use any secret that you would have created directly.
-APL Console supports 7 types of secrets:
+Seven types of secrets are supported:
- Opaque
- Service Account Token
diff --git a/docs/for-devs/console/services.md b/docs/for-devs/console/services.md
index 439522202..c0c42f2ab 100644
--- a/docs/for-devs/console/services.md
+++ b/docs/for-devs/console/services.md
@@ -6,9 +6,9 @@ sidebar_label: Services
-A service in APL is a self-service feature for:
+A Service is a self-service feature for:
-- Publicly exposing ClusterIP services. APL will automatically create and configure all ingress resources needed, including Istio Virtual Services and Gateways, certificates, DNS records and the Oauth2 proxy for Single Sign On.
+- Publicly exposing ClusterIP services. All ingress resources needed, including Istio Virtual Services and Gateways, certificates, DNS records and the OAuth2 proxy for Single Sign On, will be automatically created and configured.
- Configuring Traffic Control to split traffic between 2 deployments using the same service.
@@ -91,7 +91,7 @@ Follow the steps below to set up a CNAME when the TLS termination happens on the
2. Generate or copy your domain certificates and store them as a TLS secret in your team's namespace (a sketch of such a Secret follows after these steps).
-3. Go to the service configuration section in the APL Console.
+3. Go to the service configuration section in the Console.
4. Create a new service by selecting the k8s service and port that you want to expose.
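Step 2 above stores the certificate as a standard `kubernetes.io/tls` Secret; a minimal sketch, in which the name, namespace and base64 payloads are placeholders:

```yaml
# Sketch of a TLS Secret for the CNAME domain certificate. Equivalent to:
# kubectl create secret tls custom-cert --cert=tls.crt --key=tls.key -n team-demo
apiVersion: v1
kind: Secret
metadata:
  name: custom-cert
  namespace: team-demo   # your team's namespace
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```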
@@ -107,7 +107,7 @@ Follow the steps below to set up a CNAME when the TLS termination happens on the
1. Configure a CNAME entry with your domain name provider.
-2. Go to the service configuration section in the APL Console.
+2. Go to the service configuration section in the Console.
3. Create a new service by selecting the k8s service and port that you want to expose.
diff --git a/docs/for-devs/console/settings.md b/docs/for-devs/console/settings.md
index ff6a8d289..91ef16bd4 100644
--- a/docs/for-devs/console/settings.md
+++ b/docs/for-devs/console/settings.md
@@ -9,7 +9,7 @@ Based on self-service options allowed by the platform administrator, team member
## Configure OIDC group mapping
:::note
-The OIDC group mapping will only be visible when APL is configured with an external Identity Provider (IdP).
+The OIDC group mapping will only be visible when an external Identity Provider (IdP) is used.
:::
Change the OIDC group-mapping to allow access based on a group membership.
@@ -105,6 +105,6 @@ The self-service flags (what a team is allowed to do) can only be configured by an
| Shell | The team is allowed to use the cloud Shell |
| Download kube config | The team is allowed to download the Kube Config |
| Download docker config | The team is allowed to download the Docker Config |
-| Download certificate authority | The team is allowed to download the certificate authority (only when APL is installed with a auto-generated or custom CA) |
+| Download certificate authority | The team is allowed to download the certificate authority (only when installed with an auto-generated or custom CA) |
diff --git a/docs/for-devs/console/shell.md b/docs/for-devs/console/shell.md
index bb41dce39..fb8ba8432 100644
--- a/docs/for-devs/console/shell.md
+++ b/docs/for-devs/console/shell.md
@@ -22,7 +22,7 @@ The Shell provides an easy and efficient way to access and manage Kubernetes res
## Using the Shell
-1. Log in into the APL Console.
+1. Log in to the Console.
2. Click on the "Shell" option in the left menu.
diff --git a/docs/for-devs/console/workloads.md b/docs/for-devs/console/workloads.md
index 09c839a54..abe214088 100644
--- a/docs/for-devs/console/workloads.md
+++ b/docs/for-devs/console/workloads.md
@@ -6,7 +6,7 @@ sidebar_label: Workloads
-A Workload in APL is a self-service feature for creating Kubernetes resources using Helm charts from the Catalog.
+A Workload is a self-service feature for creating Kubernetes resources using Helm charts from the Catalog.
## Workloads (all)
@@ -57,6 +57,6 @@ image:
Now click on `Deploy Changes`
-After a few minutes, APL will have created all the needed Argo CD resources (one `applicationSet` per Workload) to deploy your workload. In the workloads list, click on the `Application` link of your workload to see the status of your workload.
+After a few minutes, all the needed Argo CD resources (one `ApplicationSet` per Workload) will be created to deploy your workload. In the workloads list, click on the `Application` link of your workload to see its status.
The values of a workload can be changed at any time. Changes will automatically be deployed.
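The `image:` fragment above is part of the workload's chart values. A slightly fuller, hypothetical example of Workload values for a Catalog chart (the available keys depend on the chart that was selected):

```yaml
# Hypothetical Workload values; key names depend on the selected chart.
image:
  repository: harbor.example.com/team-demo/backend
  tag: v2.3.1
replicaCount: 2
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```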
diff --git a/docs/for-ops/cli/check-policies.md b/docs/for-ops/cli/check-policies.md
deleted file mode 100644
index 4d0f268d6..000000000
--- a/docs/for-ops/cli/check-policies.md
+++ /dev/null
@@ -1,23 +0,0 @@
----
-slug: check-policies
-title: check-policies
-sidebar_label: otomi check-policies
----
-
-`otomi check-policies [options]`
-
-## Description
-
-Check if generated manifests adhere to defined OPA policies.
-
-## Options
-
-| Option | Description | Value Type | Default |
-| --- | --- | --- | --- |
-| `-l`, `--label` | Select charts by label (format: `