This bundle demonstrates the Hazelcast WAN topology by replicating data between two (2) Hazelcast Helm Chart clusters running on OpenShift.
https://github.com/hazelcast/charts
install_bundle -download bundle-hazelcast-3n4n5-k8s-oc_helm_wan
This bundle installs PadoGrid and Hazelcast containers in two separate projects with WAN replication enabled. As shown in the diagram below, PadoGrid is used to ingest data into the first cluster named wan1 which in turn publishes data to the second cluster named wan2. It includes scripts for starting and stopping containers per project.
- OpenShift Client, oc
- Helm, helm
❗ If you are using OpenShift 3.x (3.11+ in particular), then this bundle depends on redhat/openshift-ovs-networkpolicy to create NetworkPolicy objects for enabling communications between projects. The bin_sh/init_netpol script is provided to allow project-to-project network connections. Please see Section 5, [1] and [2] for details.
k8s/oc_helm_wan/
├── bin_sh
│   ├── build_app
│   ├── cleanup
│   ├── init_cluster
│   ├── init_netpol
│   ├── init_wan1
│   ├── login_padogrid_pod
│   ├── setenv.sh
│   ├── show_hazelcast_ips
│   ├── start_hazelcast
│   ├── start_padogrid
│   ├── stop_hazelcast
│   └── stop_padogrid
├── cluster
│   ├── hazelcast-rbac.yaml
│   ├── hazelcastcluster.crd.yaml
│   ├── operator-rbac.yaml
│   ├── pv-hostPath.yaml
│   ├── wan1-netpol.yaml
│   └── wan2-netpol.yaml
├── padogrid
│   ├── padogrid-no-pvc.yaml
│   ├── padogrid.yaml
│   └── pv-hostPath.yaml
└── templates
    ├── cluster
    ├── common
    ├── wan1
    └── wan2
Place your Hazelcast Enterprise license key in the RWE environment file as follows.
cd_rwe
vi .hazelcastenv.sh
Set IMDG_LICENSE_KEY and MC_LICENSE_KEY in .hazelcastenv.sh:
IMDG_LICENSE_KEY=<your license key>
MC_LICENSE_KEY=<your license key>
This bundle requires two (2) OpenShift projects. It is preconfigured with the project names wan1 and wan2. You can change the project names in the setenv.sh file as follows.
cd_k8s oc_helm_wan/bin_sh
vi setenv.sh
Enter your project names in setenv.sh.
...
export PROJECT_WAN1="wan1"
export PROJECT_WAN2="wan2"
...
Source in the setenv.sh file.
cd_k8s oc_helm_wan/bin_sh
. ./setenv.sh
Create OpenShift projects.
oc new-project $PROJECT_WAN1
oc new-project $PROJECT_WAN2
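You can confirm that both projects now exist as follows.
oc get projects | grep -E "$PROJECT_WAN1|$PROJECT_WAN2"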
Run build_app to initialize your local environment.
cd_k8s oc_helm_wan/bin_sh
./build_app
The build_app script performs the following:
- Creates cluster, wan1, and wan2 directories containing OpenShift configuration files.
- Updates secret.yaml with the encrypted Hazelcast license key.
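As a quick sanity check, you can list the generated directories. The directory names below are taken from the list above.
cd_k8s oc_helm_wan
ls cluster wan1 wan2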
PadoGrid runs as a non-root user that requires read/write permissions to the persistent volume. Let's add each project's default service account to the anyuid SCC.
oc edit scc anyuid
Add your projects under the users: section. For example, if your projects are wan1 and wan2, then add the following lines.
users:
- system:serviceaccount:wan1:default
- system:serviceaccount:wan2:default
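Alternatively, instead of editing the SCC directly, you can grant the anyuid SCC to each project's default service account with oc adm policy. On recent OpenShift versions this may create a role binding rather than modify the SCC's users list, but the effect is the same.
oc adm policy add-scc-to-user anyuid -z default -n $PROJECT_WAN1
oc adm policy add-scc-to-user anyuid -z default -n $PROJECT_WAN2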
We need to set up cluster-level objects to enable project-to-project communications. The init_cluster script is provided to accomplish the following:
- Apply CustomResourceDefinition for Hazelcast Operator
- Apply ClusterRole for Hazelcast Operator and Hazelcast
cd_k8s oc_helm_wan/bin_sh
./init_cluster
❗ Skip this section if you are using OpenShift 4.x.
- Create NetworkPolicy Objects for both projects
cd_k8s oc_helm_wan/bin_sh
./init_netpol
You can view the NetworkPolicy objects as follows.
# Verify the cluster has 'ovs-networkpolicy'
oc get clusternetwork
# List NetworkPolicy objects in the current project
oc get netpol
# Display detailed information on the named NetworkPolicy object
oc describe netpol <name>
# Display yaml output of the named NetworkPolicy object
oc get netpol <name> -o yaml
✏️ NetworkPolicy objects are project scoped; they are deleted when the project is deleted.
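For reference, a project-to-project policy boils down to allowing ingress from the peer project's namespace. The following is a minimal sketch, not necessarily identical to the bundled wan1-netpol.yaml, and it assumes the wan2 namespace carries a name=wan2 label (add one with oc label namespace wan2 name=wan2 if needed).
# Minimal sketch: allow all pods in wan1 to receive traffic from pods in wan2.
cat <<EOF | oc apply --namespace=wan1 -f -
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-wan2
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: wan2
EOF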
✏️ The start_hazelcast script is provided to start a Hazelcast cluster. You can optionally specify the Hazelcast version as shown below. Run ./start_hazelcast -? to see the usage.
start_hazelcast wan1|wan2 [hazelcast_version] [-?]
Launch the Hazelcast cluster in the $PROJECT_WAN2 project first. Since Hazelcast currently does not provide the WAN discovery service, we must first start the target cluster and get its member cluster IP addresses.
cd_k8s oc_helm_wan/bin_sh
./start_hazelcast $PROJECT_WAN2
Hazelcast has been configured with securityContext enabled. It might fail to start due to the security constraint of fsGroup. Check the StatefulSet events using the describe command as follows.
oc describe --namespace=$PROJECT_WAN2 statefulset hazelcast-enterprise
Output:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreate 46s (x15 over 2m8s) statefulset-controller create Pod hazelcast-enterprise-0 in StatefulSet hazelcast-enterprise failed error: pods "hazelcast-enterprise-0" is forbidden: unable to validate against any security context constraint: [fsGroup: Invalid value: []int64{1000750000}: 1000750000 is not an allowed group spec.containers[0].securityContext.securityContext.runAsUser: Invalid value: 1000750000: must be in the ranges: [1000770000, 1000779999]]
If you see a warning event similar to the above, then you need to enter valid values in the wan2/hazelcast/values.yaml file as follows.
cd_k8s oc_helm_wan
vi wan2/hazelcast/values.yaml
For our example, we would enter valid values in the values.yaml file as follows.
# Security Context properties
securityContext:
  # enabled is a flag to enable Security Context
  enabled: true
  # runAsUser is the user ID used to run the container
  runAsUser: 1000770000
  # runAsGroup is the primary group ID used to run all processes within any container of the pod
  runAsGroup: 1000770000
  # fsGroup is the group ID associated with the container
  fsGroup: 1000770000
...
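If you are not sure which IDs are valid for your project, the allowed ranges are recorded as annotations on the namespace (standard OpenShift annotations such as openshift.io/sa.scc.uid-range and openshift.io/sa.scc.supplemental-groups).
oc describe namespace $PROJECT_WAN2 | grep sa.scc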
Restart (stop and start) the Hazelcast cluster as follows.
cd_k8s oc_helm_wan/bin_sh
./stop_hazelcast $PROJECT_WAN2
./start_hazelcast $PROJECT_WAN2
Wait till the $PROJECT_WAN2 cluster has all three (3) pods running. You can run the show_hazelcast_ips script to monitor the cluster IP addresses as follows.
watch ./show_hazelcast_ips $PROJECT_WAN2
Output:
Project: wan2
Arg: wan2
Hazelcast Cluster IP Addresses Determined:
10.128.4.170:5701
10.131.2.155:5701
10.130.2.161:5701
Service DNS: hazelcast-enterprise.wan2.svc.cluster.local
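You can also confirm that all three (3) member pods are in the Running state.
oc get pods --namespace=$PROJECT_WAN2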
Once the $PROJECT_WAN2 cluster has all the Hazelcast members running, run the init_wan1 script to initialize the Hazelcast configuration files for the $PROJECT_WAN1 project. The init_wan1 script updates the wan1/hazelcast/hazelcast.yaml file with the $PROJECT_WAN2 Hazelcast IP addresses for the WAN publisher.
cd_k8s oc_helm_wan/bin_sh
./init_wan1
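To spot-check that the wan2 member addresses made it into the WAN publisher configuration, you can grep the generated file for the member port. The exact element names inside hazelcast.yaml may differ, but the addresses reported by show_hazelcast_ips should appear.
cd_k8s oc_helm_wan
grep -n "5701" wan1/hazelcast/hazelcast.yaml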
Now, launch the Hazelcast cluster in the $PROJECT_WAN1 project.
cd_k8s oc_helm_wan/bin_sh
./start_hazelcast $PROJECT_WAN1
Follow the steps in the previous section to verify the wan1 cluster status and fix the security context constraints issue as needed.
View services:
# Display services
oc get --namespace=$PROJECT_WAN1 svc
oc get --namespace=$PROJECT_WAN2 svc
Output:
$PROJECT_WAN1
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hazelcast-enterprise ClusterIP None <none> 5701/TCP 2m11s
hazelcast-enterprise-mancenter LoadBalancer 172.30.19.221 <pending> 8080:31717/TCP,443:32006/TCP 2m11s
$PROJECT_WAN2
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
hazelcast-enterprise ClusterIP None <none> 5701/TCP 17m
hazelcast-enterprise-mancenter LoadBalancer 172.30.10.13 <pending> 8080:32156/TCP,443:30792/TCP 17m
Create routes:
oc expose --namespace=$PROJECT_WAN1 svc hazelcast-enterprise-mancenter
oc expose --namespace=$PROJECT_WAN2 svc hazelcast-enterprise-mancenter
View routes:
oc get route --namespace=$PROJECT_WAN1
oc get route --namespace=$PROJECT_WAN2
Output:
$PROJECT_WAN1
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
hazelcast-enterprise-mancenter hazelcast-enterprise-mancenter-wan1.apps-crc.testing hazelcast-enterprise-mancenter http None
$PROJECT_WAN2
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
hazelcast-enterprise-mancenter hazelcast-enterprise-mancenter-wan2.apps-crc.testing hazelcast-enterprise-mancenter http None
Management Center URLs:
Use your HOST/PORT names to form the Management Center URLs. For our example above, the Management Center URLs are as follows:
WAN1: http://hazelcast-enterprise-mancenter-wan1.apps-crc.testing
WAN2: http://hazelcast-enterprise-mancenter-wan2.apps-crc.testing
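A quick reachability check of the routes can be done with curl, using the example hostnames above (substitute the HOST values from your own oc get route output).
curl -sI http://hazelcast-enterprise-mancenter-wan1.apps-crc.testing | head -1
curl -sI http://hazelcast-enterprise-mancenter-wan2.apps-crc.testing | head -1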
Open the browser with both Management Center URLs and log in using the user name admin and the password of your choice. Place the browser windows side by side and monitor the WAN replication activities. The following figures show the Management Center views after we have ingested data in Section 9.
WAN1 Management Center
WAN2 Management Center
Start PadoGrid in the $PROJECT_WAN1 project. We will use PadoGrid to ingest data into the wan1 cluster, which in turn will replicate the data to the wan2 cluster.
cd_k8s oc_helm_wan/bin_sh
./start_padogrid wan1
The start_padogrid script sets the Hazelcast service and the namespace so that the perf_test app can auto-configure the DNS address when connecting to the Hazelcast cluster.
If perf_test in the next section fails to connect to the Hazelcast cluster, then you may need to manually configure the Hazelcast client as described in Section 10.
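Before logging in to the pod, you can verify that the PadoGrid pod is running in the $PROJECT_WAN1 project (the pod name is expected to contain padogrid).
oc get pods --namespace=$PROJECT_WAN1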
Log in to the PadoGrid pod in the first project, i.e., $PROJECT_WAN1.
cd_k8s oc_helm_wan/bin_sh
oc project $PROJECT_WAN1
./login_padogrid_pod
Create the perf_test app.
create_app -product hazelcast
switch_cluster myhz
Ingest eligibility and profile blobs into Hazelcast in $PROJECT_WAN1.
cd_app perf_test/bin_sh
./test_ingestion -run
Read ingested eligibility and profile blobs from Hazelcast in $PROJECT_WAN1.
cd_app perf_test/bin_sh
./read_cache eligibility
./read_cache profile
✏️ This section requires your pod to have access to the Internet; otherwise, the build_app script will fail.
If you want to ingest additional data that are not blobs, then first build perf_test and run test_group as shown below.
Ingest customers and orders into Hazelcast in $PROJECT_WAN1.
cd_app perf_test/bin_sh
./build_app
./test_group -run -prop ../etc/group-factory.properties
Read ingested customers and orders from Hazelcast in $PROJECT_WAN1.
cd_app perf_test/bin_sh
./read_cache nw/customers
./read_cache nw/orders
Exit from the PadoGrid pod.
exit
The test_ingestion and test_group scripts may fail to connect to the Hazelcast cluster if you start PadoGrid before you start Hazelcast. In that case, restarting PadoGrid should fix the problem. If it still fails even after you started Hazelcast before PadoGrid, then you can manually enter the DNS address in the etc/hazelcast-client-k8s.xml file as described below.
cd_app perf_test
vi etc/hazelcast-client-k8s.xml
Enter the following in the etc/hazelcast-client-k8s.xml file. hazelcast-enterprise is the service and wan1 is the project name.
<kubernetes enabled="true">
<service-dns>hazelcast-enterprise.wan1.svc.cluster.local</service-dns>
</kubernetes>
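You can verify the DNS address from inside the PadoGrid pod before running the tests again. This assumes getent is available in the PadoGrid image; nslookup may not be installed.
# From inside the PadoGrid pod (./login_padogrid_pod)
getent hosts hazelcast-enterprise.wan1.svc.cluster.local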
❗ The cleanup script may hang due to a known custom resource finalizer issue [3]. If it hangs, then Ctrl-C and run it again. The cleanup script removes the CRD finalizers before deleting the CRD, but you might need to run it twice to overcome the hanging issue.
cd_k8s oc_helm_wan/bin_sh
# Cleanup all. Ctrl-C and run it again if it hangs. See [3].
./cleanup -all
# Delete projects
oc delete project $PROJECT_WAN1
oc delete project $PROJECT_WAN2
- How to use NetworkPolicy objects to connect services between projects, https://docs.ukcloud.com/articles/openshift/oshift-how-use-netpol.html
- Network Policies, https://kubernetes.io/docs/concepts/services-networking/network-policies/
- Custom resources with finalizers can "deadlock" customresourcecleanup.apiextensions.k8s.io finalizer #60538, kubernetes/kubernetes#60538