From 4be6516fad93a0de12640e52e197c9fc5caa7486 Mon Sep 17 00:00:00 2001
From: Himadri Bisht
Date: Thu, 12 Sep 2024 15:42:10 +0530
Subject: [PATCH] Added overview of validated patterns and removed the old content

---
 content/learn/about-validated-patterns.adoc | 101 ++++++++++++++++++++
 content/learn/about.adoc                    |  61 ------------
 2 files changed, 101 insertions(+), 61 deletions(-)
 create mode 100644 content/learn/about-validated-patterns.adoc
 delete mode 100644 content/learn/about.adoc

diff --git a/content/learn/about-validated-patterns.adoc b/content/learn/about-validated-patterns.adoc
new file mode 100644
index 000000000..4af773507
--- /dev/null
+++ b/content/learn/about-validated-patterns.adoc
@@ -0,0 +1,101 @@
+---
+menu: learn
+title: About Validated Patterns
+weight: 10
+aliases: /learn/about-validated-patterns/
+---
+:toc:
+
+:_content-type: ASSEMBLY
+include::modules/comm-attributes.adoc[]
+
+[id="about-validated-patterns"]
+= Overview of Validated Patterns
+
+Validated Patterns are an advanced form of reference architecture that provides a streamlined approach to deploying complex business solutions. Unlike traditional architectures, which provide only guidance, Validated Patterns are deployable, testable software artifacts with automated deployment, enhancing speed, reliability, and consistency across environments. These patterns are rigorously tested blueprints designed to meet specific business needs, reducing deployment risks.
+
+Building on traditional architectures, Validated Patterns focus on customer solutions involving multiple Red Hat products. They are based on successful deployments and include example applications and the necessary open-source projects. Users can easily modify these patterns to fit their specific needs.
+
+The creation process involves selecting customer use cases, validating the patterns with engineering teams, and developing GitOps-based automation. This automation supports Continuous Integration (CI) pipelines, allowing for proactive updates and maintenance as new product versions are released.
+
+[id="relationship-to-reference-architectures"]
+== Relationship to reference architectures
+
+Validated Patterns enhance reference architectures with automation and rigorous validation. While reference architectures provide a conceptual framework for building solutions, Validated Patterns go further by providing a deployable software artifact that automates and optimizes that framework, ensuring consistent and efficient deployments. This approach allows businesses to implement complex solutions rapidly and with confidence, knowing that the patterns have been thoroughly tested and optimized for their use case.
+
+[id="problem-validated-pattern-solve"]
+== The problem Validated Patterns solve
+
+Deploying complex business solutions involves multiple steps, each of which can introduce errors or inefficiencies if performed haphazardly. Validated Patterns address this by offering a pre-validated, automated deployment process. This reduces guesswork, minimizes manual intervention, and ensures faster, more reliable deployments. Organizations can then focus on strategic business objectives rather than deployment complexities.
+
+[id="goals"]
+== Validated Patterns goals
+
+The main goals of the Validated Patterns project include:
+
+* *Consistency*: Ensure that deployments are uniform across different environments, reducing variability and potential issues.
+* *Reliability*: Ensure that solutions are thoroughly tested and validated, minimizing the risk of errors during deployment.
+* *Efficiency*: Reduce the time and resources required for deployment by providing pre-configured, automated solutions.
+* *Scalability*: Enable businesses to deploy solutions that can scale to meet growing demands.
+* *Automation*: Streamline the deployment of complex business solutions by automating key processes. While automation is a key aspect of the framework, its primary role is to support the core goals of consistency, reliability, efficiency, and scalability.
+
+[id="who-should-use-validated-patterns"]
+== Who should use Validated Patterns?
+
+Validated Patterns are particularly suited for IT architects, advanced developers, and system administrators who are familiar with Kubernetes and Red Hat OpenShift Container Platform. These patterns are ideal for those who need to deploy complex business solutions quickly and reliably across various environments. The framework incorporates advanced https://www.cncf.io/projects/[Cloud Native] concepts and projects, such as OpenShift GitOps (https://argoproj.github.io/argo-cd/[Argo CD]), Advanced Cluster Management (https://open-cluster-management.io/[Open Cluster Management]), and OpenShift Pipelines (https://tekton.dev/[Tekton]), making them especially beneficial for users familiar with these tools.
+
+*Example use cases*:
+
+* *Enterprise-level deployments*: Organizations implementing large-scale, multi-tier applications can use Validated Patterns to ensure reliable and consistent deployments across all environments.
+* *Cloud migration*: Companies transitioning their infrastructure to the cloud can leverage Validated Patterns to automate and streamline the migration process.
+* *DevOps pipelines*: Teams relying on continuous integration and continuous deployment (CI/CD) pipelines can use Validated Patterns to automate the deployment of new features and updates, ensuring consistent and repeatable outcomes.
+
+[id="validated-patterns-community-and-ecosystem"]
+== The Validated Patterns community and ecosystem
+
+Validated Patterns are supported by a vibrant community and ecosystem that contribute to their ongoing development and refinement. This community-driven approach ensures that Validated Patterns stay current with the latest technological advancements and industry best practices. The ecosystem includes contributions from various industries and technology partners, ensuring that the patterns are applicable to a wide range of use cases and environments. This collaborative effort keeps the patterns relevant and fosters a culture of continuous improvement within the Red Hat ecosystem.
+
+[id="involvement-in-validated-patterns"]
+== Red Hat's involvement in Validated Patterns
+
+Red Hat plays a pivotal role in the development, validation, and promotion of Validated Patterns. As a leader in open-source solutions, Red Hat leverages its extensive expertise to create and maintain these patterns, ensuring that they meet the highest standards of quality and reliability. Red Hat's involvement extends beyond providing tools; it includes continuous updates to align these patterns with the latest technological advancements and industry needs. This ensures that organizations using Validated Patterns are always equipped with the most effective and up-to-date solutions available.
+Additionally, Red Hat collaborates closely with the community to expand the catalog of Validated Patterns, making these valuable resources accessible to organizations worldwide.
+
+[id="deployment-workflows"]
+== Application deployment workflows
+
+Effective deployment workflows are crucial for ensuring that applications are deployed consistently and efficiently across various environments. By leveraging OpenShift clusters and automation tools, these workflows streamline the process, reduce errors, and support scalability. The following sections outline the general structure for deploying applications, including edge patterns and GitOps integration.
+
+[id="general-structure"]
+=== General structure
+
+All patterns assume an available OpenShift cluster for deploying applications. If no OpenShift cluster is available, you can use https://console.redhat.com/openshift[cloud.redhat.com]. The documentation uses the `oc` command syntax, but `kubectl` can be used interchangeably. For each deployment, it is assumed that the user is logged in to a cluster by using the `oc login` command or by exporting the `KUBECONFIG` path.
+
+The following diagram outlines the general deployment flow for a data center application. Before proceeding, you must create a fork of the pattern repository so that changes to operational elements (such as configurations) and application code can be pushed to the forked repository as part of DevOps continuous integration (CI). Clone the repository to your local machine, and push future changes to your fork.
+
+image::/images/gitops-datacenter.png[GitOps for Datacenter]
+
+. Make a copy of the values file, for example, `values-global.yaml` or `values-hub.yaml`. These values files specify subscriptions, operators, applications, and other details. Additionally, each Validated Pattern contains a `values-secret` template file, which provides the secret values required to successfully install the pattern. Patterns do not require committing secret material to Git repositories. It is important to *avoid pushing sensitive information to a public repository accessible to others*. The Validated Patterns framework includes components that facilitate the safe use of secrets.
+
+. Deploy the application as specified by the pattern, usually by using a `make` command (`make install`). When the workload is deployed, the pattern first deploys the Validated Patterns operator, which in turn installs OpenShift GitOps. OpenShift GitOps then ensures that all components of the pattern, including required operators and application code, are deployed. A consolidated example of these steps is sketched at the end of this section.
+
+Most patterns also include the deployment of an Advanced Cluster Management (ACM) operator to manage multi-cluster deployments.
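+
+The following sketch consolidates these steps for a hub or data center deployment. The repository URL, the placeholder values in angle brackets, and the secrets template file name are illustrative assumptions; the exact file names and `make` targets can vary by pattern, so check the README of the pattern that you are deploying.
+
+[source,terminal]
+----
+# Clone your fork of the pattern repository (placeholder URL).
+git clone git@github.com:<your-account>/<pattern-repository>.git
+cd <pattern-repository>
+
+# Log in to the target OpenShift cluster, or export a KUBECONFIG path instead.
+oc login --token=<token> --server=https://api.<cluster-domain>:6443
+
+# Keep secrets out of Git: copy the secrets template out of the repository
+# and edit it locally (the template file name can vary by pattern).
+cp values-secret.yaml.template ~/values-secret.yaml
+
+# Deploy the pattern; the Validated Patterns operator and OpenShift GitOps
+# take over from here.
+make install
+----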
+
+[id="edge-patterns"]
+=== Edge patterns
+
+Many patterns include both a data center and one or more edge clusters. The following diagram outlines the general deployment flow for applications on an edge cluster. Edge OpenShift clusters are typically smaller than data center clusters and might be deployed on a three-node cluster that allows workloads on the control plane (master) nodes, or even on a single-node OpenShift (SNO) cluster. These edge clusters can be deployed on bare metal, on local virtual machines, or in a public or private cloud.
+
+image::/images/gitops-edge.png[GitOps for Edge]
+
+[id="gitops-edge"]
+=== GitOps for edge
+
+After provisioning the edge cluster, import or join it with the hub or data center cluster. For instructions on importing the cluster, see link:https://validatedpatterns.io/learn/importing-a-cluster/[Importing a cluster].
+
+After importing the cluster, ACM (Advanced Cluster Management) on the data center cluster deploys an ACM agent and agent-addon pod into the edge cluster. ACM then installs OpenShift GitOps, which deploys the required applications based on the specified criteria.
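+
+As a quick check after the import, you can confirm from the hub cluster that ACM recognizes the edge cluster and that OpenShift GitOps is syncing applications. This is a minimal sketch; the namespaces and resource names shown are common defaults for ACM and OpenShift GitOps and might differ in your environment or product version.
+
+[source,terminal]
+----
+# On the hub cluster: list the clusters managed by ACM; the imported edge
+# cluster should report as joined and available.
+oc get managedclusters
+
+# On the edge cluster: confirm that the ACM agent and agent-addon pods are
+# running (default agent namespaces shown).
+oc get pods -n open-cluster-management-agent
+oc get pods -n open-cluster-management-agent-addon
+
+# Check that OpenShift GitOps has created the expected Argo CD applications.
+oc get applications.argoproj.io -A
+----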
+
diff --git a/content/learn/about.adoc b/content/learn/about.adoc
deleted file mode 100644
index c8cbabd8c..000000000
--- a/content/learn/about.adoc
+++ /dev/null
@@ -1,61 +0,0 @@
----
-menu: learn
-title: About Validated Patterns
-weight: 10
----
-:toc:
-
-:_content-type: ASSEMBLY
-include::modules/comm-attributes.adoc[]
-
-Validated Patterns and upstream Community Patterns are a natural progression from reference architectures with additional value. Here is a brief video to explain what patterns are all about:
-
-image::https://img.youtube.com/vi/lI8TurakeG4/0.jpg[patterns-intro-video,link=https://www.youtube.com/watch?v=lI8TurakeG4]
-
-This effort is focused on customer solutions that involve multiple Red Hat
-products. The patterns include one or more applications that are based on successfully deployed customer examples. Example application code is provided as a demonstration, along with the various open source projects and Red Hat products required to for the deployment to work. Users can then modify the pattern for their own specific application.
-
-How do we select and produce a pattern? We look for novel customer use cases, obtain an open source demonstration of the use case, validate the pattern with its components with the relevant product engineering teams, and create GitOps based automation to make them easily repeatable and extendable.
-
-The automation also enables the solution to be added to Continuous Integration (CI), with triggers for new product versions (including betas), so that we can proactively find and fix breakage and avoid bit-rot.
-
-[id="who-should-use-these-patterns"]
-== Who should use these patterns?
-
-It is recommended that architects or advanced developers with knowledge of Kubernetes and Red Hat OpenShift Container Platform use these patterns. There are advanced https://www.cncf.io/projects/[Cloud Native] concepts and projects deployed as part of the pattern framework. These include, but are not limited to, OpenShift Gitops (https://argoproj.github.io/argo-cd/[ArgoCD]), Advanced Cluster Management (https://open-cluster-management.io/[Open Cluster Management]), and OpenShift Pipelines (https://tekton.dev/[Tekton])
-
-[id="general-structure"]
-== General Structure
-
-All patterns assume an OpenShift cluster is available to deploy the application(s) that are part of the pattern. If you do not have an OpenShift cluster, you can use https://console.redhat.com/openshift[cloud.redhat.com].
-
-The documentation will use the `oc` command syntax but `kubectl` can be used interchangeably. For each deployment it is assumed that the user is logged into a cluster using the `oc login` command or by exporting the `KUBECONFIG` path.
-
-The diagram below outlines the general deployment flow of a datacenter application.
-
-But first the user must create a fork of the pattern repository. This allows changes to be made to operational elements (configurations etc.) and to application code that can then be successfully made to the forked repository for DevOps continuous integration (CI). Clone the directory to your laptop/desktop. Future changes can be pushed to your fork.
-
-image::/images/gitops-datacenter.png[GitOps for Datacenter]
-
-. Make a copy of the values file. There may be one or more values files. E.g. `values-global.yaml` and/or `values-datacenter.yaml`. While most of these values allow you to specify subscriptions, operators, applications and other application specifics, there are also _secrets_ which may include encrypted keys or user IDs and passwords. It is important that you make a copy and *do not push your personal values file to a repository accessible to others!*
-. Deploy the application as specified by the pattern. This may include a Helm command (`helm install`) or a make command (`make deploy`).
-
-When the workload is deployed the pattern first deploys OpenShift GitOps. OpenShift GitOps will then take over and make sure that all application and the components of the pattern are deployed. This includes required operators and application code.
-
-Most patterns will have an Advanced Cluster Management operator deployed so that multi-cluster deployments can be managed.
-
-[id="edge-patterns"]
-== Edge Patterns
-
-Some patterns include both a data center and one or more edge clusters. The diagram below outlines the general deployment flow of applications on an edge application. The edge OpenShift cluster is often deployed on a smaller cluster than the datacenter. Sometimes this might be a three node cluster that allows workloads to be deployed on the master nodes. The edge cluster might be a single node cluster (SN0). It might be deployed on bare metal, on local virtual machines or in a public/private cloud. Provision the cluster (see above)
-
-image::/images/gitops-edge.png[GitOps for Edge]
-
-. Import/join the cluster to the hub/data center. Instructions for importing the cluster can be found [here]. You're done.
-
-When the cluster is imported, ACM on the datacenter will deploy an ACM agent and agent-addon pod into the edge cluster. Once installed and running ACM will then deploy OpenShift GitOps onto the cluster. Then OpenShift GitOps will deploy whatever applications are required for that cluster based on a label.
-
-[id="openshift-gitops-argocd"]
-== OpenShift GitOps (a.k.a ArgoCD)
-
-When OpenShift GitOps is deployed and running in a cluster (datacenter or edge) you can launch its console by choosing ArgoCD in the upper left part of the OpenShift Console (TO-DO whenry to add an image and clearer instructions here)