This topic describes how to deploy a CloudNativePG cluster across multiple availability zones to tolerate one or more availability zone failures in a given AWS region.
This deployment is intended to be used with the setup described in the Concepts for single-cluster deployments guide, together with the other building blocks outlined in the Building blocks single-cluster deployments guide.
| We provide these blueprints to show a minimal, functionally complete example with good baseline performance for regular installations. You will still need to adapt it to your environment and your organization's standards and security best practices. |
CloudNativePG is an open-source operator that manages PostgreSQL clusters on Kubernetes. It is designed to operate one primary writer instance and optional reader instances.
Install the operator directly using the operator manifest:
kubectl apply --server-side -f \
  https://raw.githubusercontent.com/cloudnative-pg/cloudnative-pg/release-1.28/releases/cnpg-1.28.1.yaml
Use the following command to verify the installation status:
kubectl rollout status deployment \
  -n cnpg-system cnpg-controller-manager
deployment "cnpg-controller-manager" successfully rolled out
The operator can also be installed using other supported methods such as the Helm chart, OLM, or the cnpg plugin for kubectl. See the CloudNativePG documentation for details.
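For example, installing the operator via its Helm chart might look like the following sketch. The repository URL and chart name are assumed to match those published by the CloudNativePG project; verify them against the CloudNativePG documentation for your version.

```shell
# Add the CloudNativePG chart repository (assumed repository URL)
helm repo add cnpg https://cloudnative-pg.github.io/charts
helm repo update

# Install the operator into the cnpg-system namespace
helm upgrade --install cnpg cnpg/cloudnative-pg \
  --namespace cnpg-system \
  --create-namespace
```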
|
We recommend enabling backups for the CloudNativePG cluster to protect against data loss. See Deploying CloudNativePG with scheduled backups to S3 for instructions on configuring scheduled backups to AWS S3. |
Installation and configuration of a CloudNativePG cluster is done via a Cluster resource.
Create a cluster.yaml file based on the following content:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-keycloak
spec:
  instances: 3 (1)
  storage:
    size: 8Gi (2)
  affinity: (3)
    podAntiAffinityType: required
    topologyKey: topology.kubernetes.io/zone
  postgresql:
    synchronous: (4)
      method: any
      number: 1
    parameters:
      max_connections: "100" (5)
  bootstrap:
    initdb: (6)
      database: keycloak
      owner: keycloak
  managed:
    services:
      disabledDefaultServices: ["ro", "r"] (7)
| 1 | Number of instances. |
| 2 | Pod storage size. This setting needs to take into account the expected size of the database and PostgreSQL WAL logs. |
| 3 | Pod affinity rules for Kubernetes scheduler. The topology.kubernetes.io/zone value ensures the scheduler will spread the pods across different availability zones. |
| 4 | Enable quorum-based synchronous replication with a single standby server. For more information about synchronous replication follow the CloudNativePG documentation. |
| 5 | Maximum number of concurrent connections to the database server. This value needs to be adjusted based on the expected maximum number of connections from the Keycloak cluster. For example, if the Keycloak cluster has 3 instances with a maximum of 30 JDBC connections each (see the Keycloak option db-pool-max-size), the value of spec.postgresql.parameters.max_connections needs to be at least 90 to provide the required connection capacity. See {links_high-availability_single-cluster-db-concepts_name} for more details. |
| 6 | Creates a database keycloak owned by the user keycloak. |
| 7 | Disables the -ro and -r default services, which are intended for read-only applications. Since Keycloak requires read-write access, it connects only to the -rw service. |
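The connection-capacity calculation from callout 5 can be sketched as a quick shell check. The instance count and pool size are the illustrative values from the example above; substitute your own.

```shell
# Illustrative values from the example: 3 Keycloak instances,
# each configured with db-pool-max-size=30 JDBC connections
KC_INSTANCES=3
KC_POOL_MAX=30

# Minimum max_connections needed by Keycloak alone; real deployments
# should add headroom for superuser and maintenance connections
REQUIRED=$((KC_INSTANCES * KC_POOL_MAX))
echo "max_connections must be at least ${REQUIRED}"   # prints 90
```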
Create the cnpg-keycloak namespace.
kubectl create ns cnpg-keycloak
Create the cnpg-keycloak cluster resource by applying the cluster.yaml file.
kubectl -n cnpg-keycloak apply -f cluster.yaml
Wait for the cnpg-keycloak cluster to reach the Ready state.
kubectl -n cnpg-keycloak wait --for condition=Ready --timeout=300s cluster cnpg-keycloak
cluster.postgresql.cnpg.io/cnpg-keycloak condition met
Optionally, view the cnpg-keycloak cluster pods and their roles.
kubectl -n cnpg-keycloak get pods -L role
NAME READY STATUS RESTARTS AGE ROLE
cnpg-keycloak-1 1/1 Running 0 10m primary
cnpg-keycloak-2 1/1 Running 0 10m replica
cnpg-keycloak-3 1/1 Running 0 10m replica
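Because the -ro and -r default services were disabled in callout 7, only the read-write service should exist for the cluster. This can be checked as follows (the -rw service name follows CloudNativePG's convention of suffixing the cluster name):

```shell
# List the services created by the operator; with the "ro" and "r"
# default services disabled, only cnpg-keycloak-rw should remain
kubectl -n cnpg-keycloak get services
```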
Monitoring of CloudNativePG clusters leverages Prometheus and Grafana.
Installation of the Prometheus stack is out of scope of this guide. Basic installation steps can be found in the Quickstart section of the CloudNativePG documentation. Additional details on how to customize the monitoring can be found in the Monitoring section of the CloudNativePG documentation.
Assuming Prometheus and Grafana are installed on the Kubernetes cluster, the following steps enable monitoring for a particular CloudNativePG cluster:
Enable collection of metrics by creating a PodMonitor resource:
kubectl -n cnpg-keycloak apply -f - <<EOF
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: cnpg-keycloak-pod-monitor
spec:
  selector:
    matchLabels:
      cnpg.io/cluster: cnpg-keycloak (1)
  podMetricsEndpoints:
  - port: metrics
EOF
| 1 | Name of the CloudNativePG cluster to be monitored. |
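To confirm that metrics are exposed before Prometheus starts scraping them, the metrics endpoint of a cluster pod can be queried directly. This sketch assumes CloudNativePG's default metrics port 9187; adjust it if your installation customizes the port.

```shell
# Port-forward the metrics port of the primary pod
# (9187 is the default CloudNativePG metrics port)
kubectl -n cnpg-keycloak port-forward pod/cnpg-keycloak-1 9187:9187 &

# Fetch the Prometheus-format metrics exposed by the instance
curl -s http://localhost:9187/metrics | grep ^cnpg_
```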
Add the grafana-dashboard.json from the cloudnative-pg/grafana-dashboards GitHub project to your Grafana instance.
After successful deployment of the CloudNativePG database, continue with Deploying Keycloak across multiple availability-zones with the Operator.