---
breadcrumb: <%= vars.platform_name %> Documentation
title: Platform Architecture and Planning Overview
owner: Customer0
---
This topic describes reference architectures and other plans for installing <%= vars.platform_name %> on any infrastructure to support the <%= vars.app_runtime_full %> (<%= vars.app_runtime_abbr %>) and <%= vars.k8s_runtime_full %> (<%= vars.k8s_runtime_abbr %>) runtime environments.
## <a id="overview"></a> Overview
A <%= vars.platform_name %> reference architecture describes a proven approach for deploying <%= vars.platform_name %> on a specific IaaS, such as AWS, Azure, GCP, OpenStack, or vSphere. <%= vars.platform_name %> reference architectures meet these requirements:
* Are secure
* Are publicly accessible
* Include common <%= vars.platform_name %>-managed services such as MySQL, RabbitMQ, and Spring Cloud Services
* Can host at least 100 app instances
* Have been deployed and validated by Pivotal to support <%= vars.ops_manager_full %>, <%= vars.app_runtime_abbr %>, and <%= vars.k8s_runtime_abbr %>
You can use <%= vars.platform_name %> reference architectures to help plan the best configuration for your <%= vars.platform_name %> deployment on your IaaS.
## <a id="topics"></a> Reference Architecture and Planning Topics
All <%= vars.platform_name %> reference architectures start with the base <%= vars.app_runtime_abbr %> architecture and base <%= vars.k8s_runtime_abbr %> architecture.
These IaaS-specific topics build on these two common base architectures:
* [AWS Reference Architecture](./aws/aws_ref_arch.html)
* [Azure Reference Architecture](./azure/azure_ref_arch.html)
* [GCP Reference Architecture](./gcp/gcp_ref_arch.html)
* [OpenStack Reference Architecture](./openstack/openstack_ref_arch.html)
* [vSphere Reference Architecture](./vsphere/vsphere_ref_arch.html)
[Control Plane Reference Architectures](control.html) describes a broader architecture that automates the installation and updating of multiple <%= vars.platform_name %> foundations, or multiple instances of each architecture above.
These topics address aspects of platform architecture and planning that the <%= vars.platform_name %> reference architectures do not cover:
* [Implementing a Multi-Foundation <%= vars.k8s_runtime_abbr %> Deployment](https://docs.pivotal.io/pks/nsxt-multi-pks.html)
* [Using Global DNS Load Balancers for Multi-Foundation](global-dns-lb.html)
## <a id="pas-base"></a> <%= vars.app_runtime_abbr %> Architecture
The diagram below illustrates a base architecture for <%= vars.app_runtime_abbr %> and how its network topology places and replicates <%= vars.platform_name %> and <%= vars.app_runtime_abbr %> components across subnets and Availability Zones (AZs).
<%# Image from PAS_Base canvas of /images/v2/PCF_Reference_Architecture_2019.graffle %>
![<%= vars.app_runtime_abbr %> Base Deployment Topology](./images/v2/export/PAS_Base.png)
[View a larger version of this diagram](./images/v2/export/PAS_Base.png)
### <a id="pas-components"></a> Internal Components
The table below describes the internal component placements shown in the diagram above:
| Component | Placement and Access Notes |
| ------------- | ------------------------------ |
| <%= vars.ops_manager %> | Deployed on one of the three public subnets. Accessible by fully qualified domain name (FQDN) or through an optional jumpbox. |
| BOSH Director | Deployed on the infrastructure subnet. |
| Jumpbox | Optional. Deployed on the infrastructure subnet for accessing <%= vars.app_runtime_abbr %> management components such as <%= vars.ops_manager %> and the Cloud Foundry Command Line Interface (cf CLI). |
| Gorouters (HTTP routers in <%= vars.platform_name %>) | Deployed on all three <%= vars.app_runtime_abbr %> subnets, one per AZ. Accessed through the HTTP, HTTPS, and SSL load balancers. |
| Diego Brain | Deployed on all three <%= vars.app_runtime_abbr %> subnets, one per AZ. The Diego Brain component is required, but SSH container access support through the Diego Brain is optional, and enabled through the SSH load balancers. |
| TCP routers | Optional. Deployed on all three <%= vars.app_runtime_abbr %> subnets, one per AZ, to support TCP routing. |
| Service tiles | Service brokers and shared services instances are deployed to the Services subnet. Dedicated on-demand service instances are deployed to an on-demand services subnet. |
| Isolation segments | Deployed on an isolation segment subnet. Includes Diego Cells and Gorouters for running and accessing apps hosted within isolation segments. |
### <a id="pas-network"></a> Networks
These sections describe <%= vars.app_runtime_abbr %>'s recommendations for defining your networks and load-balancing their incoming requests:
#### <a id="pas-subnets"></a> Required Subnets
<%= vars.app_runtime_abbr %> requires these statically defined networks to host its main component systems (a subnet-planning sketch follows this list):
* Infrastructure subnet - `/24` segment<br>This subnet contains VMs that require access only for Platform Administrators, such as <%= vars.ops_manager %>, the BOSH Director, and an optional jumpbox.
* <%= vars.app_runtime_abbr %> subnet - `/24` segment<br>This subnet contains <%= vars.app_runtime_abbr %> runtime VMs, such as Gorouters, Diego Cells, and Cloud Controllers.
* Services subnet - `/24` segment<br>The services and on-demand services networks support <%= vars.ops_manager %> tiles that you might add in addition to <%= vars.app_runtime_abbr %>. In other words, they are the networks for everything that is not <%= vars.app_runtime_abbr %>. Some service tiles can call for additional network capacity to grow into on demand. If you use services with this capability, <%= vars.recommended_by %> recommends that you add an on-demand services network for each on-demand service.
* On-demand services subnets - `/24` segments<br>This is for services that can allocate network capacity on-demand from BOSH for their worker VMs. <%= vars.recommended_by %> recommends allocating a dedicated subnet to each on-demand service. For example, you can configure the Redis tile as follows:
* **Network:** Enter the existing `Services` network, to host the service broker.
* **Services network:** Deploy a new network `OD-Services1`, to host the Redis worker VMs.
<br>
Another on-demand service tile can then also use `Services` for its broker and a new `OD-Services2` network for its workers, and so on.
* Isolation segment subnets - `/24` segments<br>You can add one or more isolation segment tiles to a <%= vars.app_runtime_abbr %> installation to compartmentalize hosting and routing resources. For each isolation segment you deploy, you should designate a `/24` network for its range of address space.
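The arithmetic behind these `/24` ranges is easy to check before you commit address space. The sketch below is illustrative only and not part of any <%= vars.platform_name %> tooling; it uses Python's standard `ipaddress` module to carve the segments above out of a hypothetical `10.0.0.0/16` block:

```python
# Illustrative subnet planning only; the 10.0.0.0/16 parent block and the
# subnet names are placeholders, not values required by the platform.
import ipaddress

parent = ipaddress.ip_network("10.0.0.0/16")
segments = parent.subnets(new_prefix=24)  # generator of /24 networks

plan = {
    "infrastructure": next(segments),
    "pas": next(segments),
    "services": next(segments),
    "od-services1": next(segments),  # one per on-demand service
    "iso-seg1": next(segments),      # one per isolation segment
}

for name, cidr in plan.items():
    # num_addresses counts the network and broadcast addresses too
    print(f"{name:16} {cidr}  ({cidr.num_addresses - 2} usable hosts)")
```

Each `/24` yields 254 usable host addresses before any reservations; some IaaSes reserve a few additional addresses in every subnet, so plan with a small margin.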
#### <a id="pas-lb"></a> Load Balancing
Any <%= vars.app_runtime_abbr %> installation needs a suitable load balancer to send incoming HTTP, HTTPS, SSH, and SSL traffic to its Gorouters and app containers. All installations approaching production-level use rely on external load balancing from hardware appliance vendors or other network-layer solutions.
The load balancer can also perform Layer 4 or Layer 7 load balancing functions. SSL can be terminated at the load balancer or used as a pass-through to the Gorouter.
Common deployments of load balancing in <%= vars.app_runtime_abbr %> are:
* HTTP/HTTPS traffic to and from Gorouters
* TCP traffic to and from TCP routers
* Traffic from the Diego Brain, when developers access app containers through SSH
To load-balance across multiple <%= vars.app_runtime_abbr %> foundations, use an IaaS- or vendor-specific Global Traffic Manager or Global DNS load balancer.
For more information, see [Global DNS Load Balancers for Multi-Foundation Environments](global-dns-lb.html).
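As a quick smoke test of this traffic layout, you can confirm that the load balancer accepts connections for each class of traffic. The sketch below is a minimal illustration, assuming conventional ports (80 and 443 for HTTP(S) to the Gorouters, 2222 for SSH to the Diego Brain) and placeholder hostnames; substitute your own domains and whatever ports your load balancer actually exposes:

```python
# Minimal reachability check; hostnames and ports are assumptions based on
# common Cloud Foundry defaults, not values read from your deployment.
import socket

ENDPOINTS = [
    ("sys.example.com", 80),        # HTTP  -> Gorouters
    ("sys.example.com", 443),       # HTTPS -> Gorouters
    ("ssh.sys.example.com", 2222),  # SSH   -> Diego Brain, if enabled
]

for host, port in ENDPOINTS:
    try:
        with socket.create_connection((host, port), timeout=5):
            print(f"{host}:{port} reachable through the load balancer")
    except OSError as err:
        print(f"{host}:{port} unreachable: {err}")
```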
### <a id="pas-ha"></a> High Availability
<%= vars.app_runtime_abbr %> is not considered highly available (HA) until it runs across at least two AZs. <%= vars.recommended_by %> recommends defining three AZs.
On IaaSes with their own HA capabilities, using the IaaS HA in conjunction with a <%= vars.platform_name %> HA topology provides the best of both worlds. Multiple AZs give <%= vars.platform_name %> redundancy, so that losing an AZ is not catastrophic. The BOSH Resurrector can then replace lost VMs as needed to repair a foundation.
To back up and restore a foundation externally, use BOSH Backup and Restore (BBR). For more information, see the [BOSH Backup and Restore](https://docs.cloudfoundry.org/bbr/) documentation.
### <a id="pas-storage"></a> Storage
<%= vars.app_runtime_abbr %> requires disk storage for each component, for both persistent data and to allocate to ephemeral data. You size these disks in the **Resource Config** pane of the <%= vars.app_runtime_abbr %> tile. For more information about storage configuration and capacity planning, see the corresponding section in the [reference architecture for your IaaS](#topics).
The platform also requires you to configure file storage for large shared objects. These blobstores can be external or internal. For details, see [Configuring File Storage for <%= vars.app_runtime_abbr %>](../customizing/pas-file-storage.html).
### <a id="pas-security"></a> Security
For information about how <%= vars.app_runtime_abbr %> implements security, see:
* [<%= vars.platform_name %> Infrastructure Security](../security/pcf-infrastructure/index.html)
* [Security Concepts](../security/concepts/)
* [Certificates and TLS in <%= vars.platform_name %>](../security/pcf-infrastructure/certificates-index.html)
* [Network Communication Paths in <%= vars.platform_name %>](../security/networking/#net_commpaths) in _Network Security_
### <a id="pas-domains"></a> Domain Names
<%= vars.app_runtime_abbr %> requires these domain names to be registered:
* System domain, for <%= vars.app_runtime_abbr %> and other tiles: `sys.domain.name`
* App domain, for your apps: `app.domain.name`
You must also define these wildcard domain names and include them when creating certificates that access <%= vars.app_runtime_abbr %> and its hosted apps (a certificate check sketch follows this list):
* \*.SYSTEM-DOMAIN
* \*.APPS-DOMAIN
* \*.login.SYSTEM-DOMAIN
* \*.uaa.SYSTEM-DOMAIN
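To confirm that a deployed certificate actually lists these wildcard names, you can inspect its subject alternative names. The sketch below is a hypothetical check that assumes the platform API answers at `api.` plus the system domain and uses `sys.example.com` as a stand-in for SYSTEM-DOMAIN; the apps domain certificate can be checked the same way against an app hostname:

```python
# Hypothetical certificate check; SYSTEM_DOMAIN is a placeholder.
import socket
import ssl

SYSTEM_DOMAIN = "sys.example.com"
API_HOST = f"api.{SYSTEM_DOMAIN}"
REQUIRED = {
    f"*.{SYSTEM_DOMAIN}",
    f"*.login.{SYSTEM_DOMAIN}",
    f"*.uaa.{SYSTEM_DOMAIN}",
}

context = ssl.create_default_context()
with socket.create_connection((API_HOST, 443), timeout=5) as sock:
    with context.wrap_socket(sock, server_hostname=API_HOST) as tls:
        cert = tls.getpeercert()  # parsed cert from the TLS handshake

# subjectAltName is a tuple of (type, value) pairs, such as ("DNS", "*.sys...")
sans = {value for kind, value in cert.get("subjectAltName", ()) if kind == "DNS"}
missing = REQUIRED - sans
print("all wildcard names covered" if not missing else f"missing names: {missing}")
```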
### <a id="pas-scaling"></a> Component Scaling
For recommendations on scaling <%= vars.app_runtime_abbr %> for different deployment scenarios, see [Scaling <%= vars.app_runtime_abbr %>](../opsguide/scaling-ert-components.html).
## <a id="pks-base"></a> <%= vars.k8s_runtime_abbr %> Architecture
The diagram below illustrates a base architecture for <%= vars.k8s_runtime_abbr %> and how its network topology places and replicates <%= vars.platform_name %> and <%= vars.k8s_runtime_abbr %> components across subnets and AZs:
<%# Image from PKS_Base canvas of /images/v2/PCF_Reference_Architecture_2019.graffle %>
![<%= vars.k8s_runtime_abbr %> Base Deployment Topology](./images/v2/export/PKS_Base.png)
[View a larger version of this diagram](./images/v2/export/PKS_Base.png)
### <a id="pks-components"></a> Internal Components
The table below describes the internal component placements shown in the diagram above:
| Component | Placement and Access Notes |
| --------- | -------------------------- |
| <%= vars.ops_manager %> | Deployed on one of the subnets. Accessible by fully qualified domain name (FQDN) or through an optional jumpbox. |
| BOSH Director | Deployed on the infrastructure subnet. |
| Jumpbox | Optional. Deployed on the infrastructure subnet for accessing <%= vars.k8s_runtime_abbr %> management components such as <%= vars.ops_manager %> and the `kubectl` command line interface. |
| <%= vars.k8s_runtime_abbr %> API | Deployed as a service broker VM on the <%= vars.k8s_runtime_abbr %> services subnet. Handles <%= vars.k8s_runtime_abbr %> API and service adapter requests, and manages <%= vars.k8s_runtime_abbr %> clusters. For more information, see [Enterprise PKS Components](https://docs.pivotal.io/pks/index.html#components) in _Enterprise <%= vars.k8s_runtime_full %> (Enterprise <%= vars.k8s_runtime_abbr %>)_. |
| Harbor tile | Optional container images registry, typically deployed to the services subnet. |
| <%= vars.k8s_runtime_abbr %> Cluster | Deployed to a dynamically created, dedicated <%= vars.k8s_runtime_abbr %> cluster subnet. Each cluster consists of worker nodes that run the workloads, or apps, and one or more master nodes. |
### <a id="pks-network"></a> Networks
These sections describe <%= vars.recommended_by %>'s recommendations for defining your networks and load-balancing their incoming requests.
#### <a id="pks-subnets"></a> Subnets Requirements
<%= vars.k8s_runtime_abbr %> requires two defined networks to host the main elements that compose it:
* Infrastructure subnet - `/24`<br>This subnet contains VMs that require access only for Platform Administrators, such as <%= vars.ops_manager %>, BOSH Director, and an optional jumpbox.
* <%= vars.k8s_runtime_abbr %> services subnet - `/24`<br>This subnet hosts the <%= vars.k8s_runtime_abbr %> API VM and optional service tiles such as Harbor.
* <%= vars.k8s_runtime_abbr %> cluster subnets - each one a `/24` from a pool of pre-allocated IPs<br>These subnets host <%= vars.k8s_runtime_abbr %> clusters.
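The cluster subnets behave like a pool: each newly created cluster consumes the next free `/24`. The sketch below is illustrative only and models that allocation with Python's `ipaddress` module, using a placeholder `10.1.0.0/16` pool, which yields 256 `/24` segments:

```python
# Illustrative allocation model; the pool CIDR is a placeholder, and real
# allocation is handled by BOSH, not by scripts like this.
import ipaddress

pool = ipaddress.ip_network("10.1.0.0/16").subnets(new_prefix=24)

def next_cluster_subnet():
    """Hand out the next unused /24 for a newly created cluster."""
    return next(pool)  # raises StopIteration when the pool is exhausted

for cluster in ("cluster-1", "cluster-2", "cluster-3"):
    print(cluster, "->", next_cluster_subnet())
```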
#### <a id="pks-lb"></a> Load Balancing
Load balancers can be used to manage traffic across master nodes of a <%= vars.k8s_runtime_abbr %> cluster or for deployed workloads. For more information on how to configure load balancers for <%= vars.k8s_runtime_abbr %>, see the corresponding section in the [reference architecture for your IaaS](#topics).
### <a id="pks-ha"></a> High Availability
<%= vars.k8s_runtime_abbr %> has no inherent HA capabilities to design for. To support <%= vars.k8s_runtime_abbr %>, design for HA at the IaaS, storage, power, and access layers wherever possible.
### <a id="pks-storage"></a> Storage
<%= vars.k8s_runtime_abbr %> requires shared storage across all AZs so that deployed workloads can allocate the storage they require.
### <a id="pks-security"></a> Security
For information about how <%= vars.k8s_runtime_abbr %> implements security, see [Enterprise <%= vars.k8s_runtime_abbr %> Security](https://docs.pivotal.io/pks/security.html) and [Firewall Ports](https://docs.pivotal.io/pks/ports-protocols-nsx-t.html).
### <a id="pks-domains"></a> Domain Names
<%= vars.k8s_runtime_abbr %> requires the `*.pks.domain.name` domain name to be registered, for use when creating the wildcard certificate and configuring the <%= vars.k8s_runtime_abbr %> tile.
The wildcard certificate covers both the <%= vars.k8s_runtime_abbr %> API domain, such as `api.pks.domain.name`, and the <%= vars.k8s_runtime_abbr %> cluster domains, such as `cluster.pks.domain.name`.
### <a id="pks-management"></a> Cluster Management
For information about managing <%= vars.k8s_runtime_abbr %> clusters, see [Managing Clusters](https://docs.pivotal.io/pks/managing-clusters.html).