This topic describes how to configure backup and restore for a CloudNativePG cluster using the Barman Cloud plugin with AWS S3 as the object store.
These instructions are intended for use with the setup described in the Concepts for single-cluster deployments guide. Use them together with the other building blocks outlined in the Building blocks single-cluster deployments guide.
| We provide these blueprints to show a minimal, functionally complete example with good baseline performance for regular installations. You still need to adapt it to your environment and your organization’s standards and security best practices. |
Both the CloudNativePG operator and the Barman Cloud plugin are required to enable backup and restore operations using an S3-compatible object store.
Install the CloudNativePG Operator as described in Deploying CloudNativePG in multiple availability zones.
The Barman Cloud plugin requires cert-manager. Install it using the following commands or, alternatively, any other method described in the cert-manager installation docs:
kubectl apply --filename=https://github.com/cert-manager/cert-manager/releases/download/v1.20.0/cert-manager.yaml
Wait for the cert-manager deployments to be ready:
kubectl rollout status deployment --namespace=cert-manager cert-manager-webhook
kubectl rollout status deployment --namespace=cert-manager cert-manager
deployment "cert-manager-webhook" successfully rolled out
deployment "cert-manager" successfully rolled out
Install the Barman Cloud plugin:
kubectl apply --filename=https://github.com/cloudnative-pg/plugin-barman-cloud/releases/download/v0.11.0/manifest.yaml
Wait for the Barman Cloud deployment to be ready:
kubectl rollout status deployment --namespace=cnpg-system barman-cloud
deployment "barman-cloud" successfully rolled out
This blueprint uses AWS S3 as the object store. Refer to the Barman Cloud plugin object stores documentation for details on how to configure other cloud providers.
This blueprint uses AWS access keys for authentication. Other authentication methods, such as IAM Roles for Service Accounts (IRSA), are also supported. Refer to the Barman Cloud plugin AWS S3 documentation for more details.
Create a Kubernetes secret with the AWS credentials required to access the S3 bucket. The secret must be created in the same namespace as the CloudNativePG cluster.
kubectl create secret generic aws-creds \
  --namespace cnpg-keycloak \
  --from-literal=ACCESS_KEY_ID=<access_key> \
  --from-literal=ACCESS_SECRET_KEY=<secret_key> \
  --from-literal=REGION=<region>
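As a quick sanity check before proceeding, you can confirm that the secret contains the expected keys. `kubectl describe` lists the key names and their sizes without revealing the values:

```shell
# Expect ACCESS_KEY_ID, ACCESS_SECRET_KEY, and REGION listed under "Data".
kubectl describe secret aws-creds --namespace cnpg-keycloak
```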
The ObjectStore resource defines the S3 bucket destination and encryption settings for backups and WAL archiving.
The example below uses AWS S3, but the Barman Cloud plugin also supports Azure Blob Storage and Google Cloud Storage.
For details on configuring other cloud providers, refer to the Barman Cloud plugin object stores documentation.
| The S3 bucket must be created before configuring the ObjectStore resource. |
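If the bucket does not exist yet, one way to create it is with the AWS CLI. This is a sketch assuming the AWS CLI is installed and configured with credentials permitted to create buckets; replace the placeholders as elsewhere in this guide:

```shell
# Create the S3 bucket that will hold the backups and WAL archive.
aws s3 mb s3://<bucket-name> --region <region>
```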
Create an object-store.yaml file based on the following content:
apiVersion: barmancloud.cnpg.io/v1
kind: ObjectStore
metadata:
  name: cnpg-store
spec:
  configuration:
    destinationPath: s3://<bucket-name>/<backup-path>/ (1)
    s3Credentials:
      accessKeyId:
        name: aws-creds (2)
        key: ACCESS_KEY_ID
      secretAccessKey:
        name: aws-creds
        key: ACCESS_SECRET_KEY
      region:
        name: aws-creds
        key: REGION
    wal: (3)
      encryption: AES256
      compression: gzip
      maxParallel: 8
    data: (4)
      compression: gzip
      encryption: AES256
| 1 | The S3 bucket destination path for backups.
Replace <bucket-name> and <backup-path> with the appropriate values. |
| 2 | References the aws-creds secret created in the previous step. |
| 3 | WAL archiving configuration with server-side encryption and gzip compression. maxParallel controls the number of WAL files to be archived in parallel.
For other supported compression algorithms, refer to the Barman Cloud plugin compression documentation. |
| 4 | Base backup data configuration with compression and encryption. |
Apply the ObjectStore resource:
kubectl apply --namespace=cnpg-keycloak --filename=object-store.yaml
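You can then verify that the resource was created. This assumes the plugin's CRD registers the resource under the `objectstores` name:

```shell
# Expect cnpg-store in the output.
kubectl get objectstores --namespace cnpg-keycloak
```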
| Adding the Barman Cloud plugin to a running CloudNativePG cluster may cause downtime as the Pods are restarted. |
Create the CloudNativePG Cluster resource to enable backup and WAL archiving to the object store using the Barman Cloud plugin.
Create the cluster.yaml file to include the plugins section:
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: cnpg-keycloak
spec:
  instances: 3 (1)
  storage:
    size: 8Gi (2)
  affinity: (3)
    podAntiAffinityType: required
    topologyKey: topology.kubernetes.io/zone
  postgresql:
    synchronous: (4)
      method: any
      number: 1
      dataDurability: required
    parameters:
      max_connections: "100" (5)
  bootstrap:
    initdb: (6)
      database: keycloak
      owner: keycloak
  managed:
    services:
      disabledDefaultServices: ["ro", "r"] (7)
  plugins: (8)
    - name: barman-cloud.cloudnative-pg.io
      isWALArchiver: true
      parameters:
        barmanObjectName: cnpg-store
| 1 | Number of instances. |
| 2 | Pod storage size. This setting needs to take into account the expected size of the database and PostgreSQL WAL logs. |
| 3 | Pod affinity rules for Kubernetes scheduler.
The topology.kubernetes.io/zone value ensures the scheduler will spread the pods across different availability zones. |
| 4 | Enables quorum-based synchronous replication with a single standby server. For more information about synchronous replication, see the CloudNativePG documentation. |
| 5 | Database connection limit. This value should be adjusted based on the expected total number of JDBC connections from the Keycloak cluster. |
| 6 | Creates a database keycloak owned by the user keycloak. |
| 7 | Disables the -ro and -r default services, which are intended for read-only applications.
Since Keycloak requires read-write access, it only connects to the -rw service. |
| 8 | Enables the Barman Cloud plugin for WAL archiving.
The barmanObjectName references the ObjectStore resource created in the previous step. |
Apply the updated cluster resource:
kubectl -n cnpg-keycloak apply -f cluster.yaml
Wait for the cnpg-keycloak cluster to reach the Ready state:
kubectl -n cnpg-keycloak wait --for condition=Ready --timeout=300s cluster cnpg-keycloak
cluster.postgresql.cnpg.io/cnpg-keycloak condition met
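Once the cluster is ready, you can check that WAL archiving to the object store is healthy and that the read-write service Keycloak will connect to exists. The `ContinuousArchiving` condition type used below is based on the CloudNativePG status API:

```shell
# Check continuous WAL archiving; expect "True".
kubectl -n cnpg-keycloak get cluster cnpg-keycloak \
  -o jsonpath='{.status.conditions[?(@.type=="ContinuousArchiving")].status}'

# Confirm the read-write service exists (the -ro and -r services are disabled).
kubectl -n cnpg-keycloak get service cnpg-keycloak-rw
```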
The ScheduledBackup resource enables automatic periodic backups of the CloudNativePG cluster.
Scheduled backups are the recommended way to implement a reliable backup strategy.
Create a scheduled-backup.yaml file based on the following content:
apiVersion: postgresql.cnpg.io/v1
kind: ScheduledBackup
metadata:
  name: cnpg-keycloak-scheduled-backup
spec:
  schedule: "0 0 0 * * *" (1)
  backupOwnerReference: self (2)
  cluster:
    name: cnpg-keycloak (3)
  method: plugin (4)
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io (5)
  immediate: true (6)
  suspend: false (7)
| 1 | Cron schedule expression using a six-field format that includes seconds: seconds minutes hours day-of-month month day-of-week.
This example runs at midnight every day.
Adjust the schedule based on the Recovery Point Objective (RPO) requirements. |
| 2 | Sets the ownership reference for the backup objects. self means the ScheduledBackup resource owns the created backups, and deleting it will also delete all associated backups. |
| 3 | The name of the CloudNativePG cluster to back up. |
| 4 | The backup method. plugin delegates the backup operation to the configured plugin. |
| 5 | The Barman Cloud plugin that performs the backup to the object store. |
| 6 | Triggers a backup immediately upon creation of the ScheduledBackup resource, in addition to the configured schedule. |
| 7 | When set to true, temporarily suspends scheduled backups without deleting the resource. |
Apply the ScheduledBackup resource:
kubectl apply --namespace=cnpg-keycloak --filename=scheduled-backup.yaml
Verify the scheduled backup has been created:
kubectl -n cnpg-keycloak get scheduledbackups
NAME AGE CLUSTER LAST BACKUP
cnpg-keycloak-scheduled-backup 30s cnpg-keycloak
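Besides scheduled backups, a one-off backup can be triggered with a Backup resource that uses the same plugin method. The following is a sketch; the name cnpg-keycloak-ondemand is an arbitrary example:

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Backup
metadata:
  name: cnpg-keycloak-ondemand
spec:
  cluster:
    name: cnpg-keycloak
  method: plugin
  pluginConfiguration:
    name: barman-cloud.cloudnative-pg.io
```

Apply it in the cnpg-keycloak namespace and monitor its progress with kubectl -n cnpg-keycloak get backups.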
After the backup is configured successfully, continue with Deploying Keycloak across multiple availability-zones with the Operator.
For details on how to restore a CloudNativePG cluster from a backup, see Recovering a CloudNativePG cluster from an S3 backup.
For more information about backup and recovery operations, refer to the CloudNativePG documentation.