PostgreSQL Major Upgrades with CloudNativePG and Kubernetes Clusters

Leverage CloudNativePG to ensure smooth transitions and minimal downtime

In a previous blog post, Leverage a New Way to Import an Existing Postgres Database to Kubernetes, we discussed migrating a PostgreSQL 10 database from RDS to a CloudNativePG (CNPG) managed PostgreSQL database cluster within a Kubernetes environment. This utilized the database import capability, leveraging logical backup and restore methods such as pg_dump and pg_restore.

However, this type of database migration has a significant limitation: it does not capture changes made to the origin database once the process begins. Consequently, transitioning from the old to the new database requires you to halt write operations on the origin's primary database until the migration is complete. This approach is known as offline database import, and when upgrading to a new major PostgreSQL version, it becomes an offline major upgrade.

This process allows you to:

- Test the upgrade of your database to a newer major version of PostgreSQL
- Measure how long the import takes, and therefore the downtime window required for the cutover

The first point assesses readiness for a major version upgrade, while the second considers whether an offline major upgrade aligns with your organization's tolerance for downtime during the database cutover. If cutover timing is suitable, you can proceed with an offline upgrade. Otherwise, you might explore using PostgreSQL's native logical replication to minimize cutover downtime to nearly zero.

This article provides an update on achieving near-zero cutover time with CloudNativePG when performing database imports or major version upgrades of PostgreSQL within a Kubernetes cluster. These processes are known as online database import and online major upgrades; in the latter case, which is the main focus of this article, the destination database runs a higher major PostgreSQL version than the origin.

Major Online Upgrades

Major online upgrades with PostgreSQL have been an essential technique for many years. Originally, these upgrades relied on PostgreSQL's extensibility and triggers. The central concept involved using triggers to route changes in the database to a queue, which consumers would then process and apply to the destination database. One prominent tool for this was Londiste, developed at Skype, which handled data and sequence synchronization, ensuring zero cutover time. However, the trigger-based approach had notable drawbacks, particularly its impact on the primary database: each change (insert, update, or delete) fired a trigger that stored an event in the queue, implemented as a set of rotating tables.

With the introduction of logical decoding in PostgreSQL 9.4, extensions like pglogical 2 became popular for performing major online upgrades. Logical decoding proved to be lighter than the trigger-based method, easing the performance burden on the origin database. With the release of PostgreSQL 10, the database introduced native support for logical replication via publications and subscriptions, which has become crucial for CloudNativePG (CNPG) major version upgrade practices. This article will leverage these native PostgreSQL capabilities to perform major online upgrades within CloudNativePG.
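
At their core, publications and subscriptions are plain SQL objects. Here is a minimal sketch of the pattern (the connection string is a placeholder), which the rest of this article drives declaratively through CloudNativePG:

-- On the origin database: publish changes to the tables being migrated
CREATE PUBLICATION migrate FOR TABLE songs;

-- On the destination database: subscribe to that publication
CREATE SUBSCRIPTION migrate
  CONNECTION 'host=origin-host dbname=app user=replicator password=secret'
  PUBLICATION migrate;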

Before we continue, it's crucial to acknowledge certain limitations of logical replication in PostgreSQL, including:

- Schema (DDL) changes are not replicated
- Sequence data is not replicated
- Large objects are not replicated

In practical terms, you are responsible for ensuring the following during a CloudNativePG major upgrade:

- Avoiding schema changes on the origin database while the migration is in progress, or applying them manually on both sides
- Synchronizing sequences before the cutover, as shown later with setval()
- Migrating large objects, if any exist; today, binary data in PostgreSQL is typically stored as bytea fields, so this issue may not arise (a quick check is sketched below)
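
If you are unsure whether your origin database uses large objects, a quick query against the pg_largeobject_metadata catalog settles it (a minimal sketch; run it on the origin's application database):

-- A count of 0 means there are no large objects to migrate separately
SELECT count(*) FROM pg_largeobject_metadata;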

The Database of Origin

The source database is the one you aim to migrate to Kubernetes using CloudNativePG. This database could be hosted anywhere, such as in an Amazon RDS instance like the one detailed in Leverage a New Way to Import an Existing Postgres Database to Kubernetes. Alternatively, it could exist within the same Kubernetes ecosystem, perhaps managed by another operator or already managed by CloudNativePG, such as when upgrading Postgres 13 to Postgres 16.

The process remains consistent regardless of the initial setup. To proceed, certain prerequisites must be fulfilled on the origin database:

- A user that can log in with the REPLICATION privilege and read the tables to be migrated
- A pg_hba rule allowing that user to connect from the destination cluster
- A publication covering the tables you want to migrate

These steps should typically be automated, employing scripts to simplify execution. For illustration, the following YAML configuration for CloudNativePG demonstrates how to initiate a PostgreSQL 13 instance named pg13 within your Kubernetes cluster, meeting the above conditions:

apiVersion: v1
data:
  password: QW5ndXNZb3VuZzE5NTU=
  username: YW5ndXM=
kind: Secret
metadata:
  name: angus
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg13
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:13
  enableSuperuserAccess: false
  instances: 1
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      dataChecksums: true
      walSegmentSize: 32
      postInitApplicationSQL:
      - CREATE TABLE songs (id SERIAL PRIMARY KEY, title TEXT)
      - ALTER TABLE songs OWNER TO app
      - INSERT INTO songs(title) VALUES ('Back in black')
      - CREATE PUBLICATION migrate FOR TABLE songs
  managed:
    roles:
    - name: angus
      ensure: present
      comment: User for logical replication connections
      login: true
      replication: true
      inRoles:
      - app
      passwordSecret:
        name: angus
  postgresql:
    pg_hba:
    - hostssl replication angus 10.0.0.0/8 md5

The YAML utilizes a standard PostgreSQL 13 image (imageName) from the CloudNativePG community repository. The bootstrap section runs several SQL commands as the postgres user to configure the application database (app). These commands:

- Create the songs table
- Assign ownership of the table to the app user
- Insert an initial record
- Create the migrate publication for the songs table

Within the .spec.managed.roles section, the user angus (name assigned arbitrarily) is established to facilitate connectivity from the target database, ensuring:

- The ability to log in
- The REPLICATION privilege, required for logical replication connections
- Membership in the app role, granting read access to the tables being replicated
- A password taken from the angus secret, which can be generated with:

kubectl create secret generic angus \
  --dry-run=client -o yaml \
  --from-literal=username=angus \
  --from-literal=password=AngusYoung1955
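
You can either paste the generated Secret into the manifest, as done above, or pipe the command's output straight to kubectl apply -f - to create the secret directly.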

Proceed by applying the provided manifest to your Kubernetes cluster. For those without a Kubernetes setup, consider utilizing KinD (Kubernetes in Docker) as explained in CloudNativePG quickstart.
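
For example, assuming you saved the manifest above as pg13.yaml (an arbitrary filename):

$ kubectl apply -f pg13.yaml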

After a few seconds, your pg13 cluster should be up:

$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
pg13-1   1/1     Running   0          29s

You can now verify the songs table is in the app database and is owned by the app user:

$ kubectl exec -ti pg13-1 -c postgres -- psql app -c '\d'
            List of relations
 Schema |     Name     |   Type   | Owner
--------+--------------+----------+-------
 public | songs        | table    | app
 public | songs_id_seq | sequence | app
(2 rows)

Let’s now query the songs table:

$ kubectl exec -ti pg13-1 -c postgres -- psql app -c 'SELECT * FROM songs'
 id |     title
----+---------------
  1 | Back in black
(1 row)

As expected, the angus user exists and is a member of the app role:

$ kubectl exec -ti pg13-1 -c postgres -- psql app -c '\du angus'
            List of roles
 Role name |  Attributes | Member of
-----------+-------------+-----------
 angus     | Replication | {app}

Finally, let’s check the publication:

$ kubectl exec -ti pg13-1 -c postgres -- psql app -c '\dRp+'
                            Publication migrate
  Owner   | All tables | Inserts | Updates | Deletes | Truncates | Via root
----------+------------+---------+---------+---------+-----------+----------
 postgres | f          | t       | t       | t       | t         | f
Tables:
    "public.songs"

Everything is in line with our configuration file.

The Database of Destination

While the source database can vary, the destination database will reliably utilize CloudNativePG, allowing us predictable control over its setup. Our goal here is to create a new PostgreSQL 16 instance, referred to as pg16. Although PostgreSQL 16 is in beta at the time of writing, using it facilitates robust testing ahead of the anticipated official release by the PostgreSQL Global Development Group (PGDG) later in 2023.

In this example, assume both pg13 and pg16 are housed in the same namespace within a Kubernetes cluster, though this setup is adaptable to different environments.

apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg16
spec:
  imageName: ghcr.io/cloudnative-pg/postgresql:16
  enableSuperuserAccess: false
  instances: 1
  storage:
    size: 1Gi
  bootstrap:
    initdb:
      dataChecksums: true
      walSegmentSize: 32
      import:
        schemaOnly: true
        type: microservice
        databases:
        - app
        source:
          externalCluster: pg13
        postImportApplicationSQL:
        - CREATE SUBSCRIPTION migrate CONNECTION 'host=pg13-rw user=angus dbname=app sslmode=require password=AngusYoung1955' PUBLICATION migrate
  externalClusters:
  - name: pg13
    connectionParameters:
      # Use the correct IP or host name for the source database
      host: pg13-rw
      user: angus
      dbname: postgres
    password:
      name: angus
      key: password

Here, we create a new Postgres instance from scratch and, upon setup, import only the schema of the app database from the pg13 cluster, as detailed under the externalClusters section. This involves connecting to the primary endpoint designated by the pg13-rw service, utilizing the angus user credentials stored in the same secret as used for pg13.

On completing the import process, we request the operator to establish a SUBSCRIPTION to the migrate PUBLICATION on pg13. Note: security-conscious users will notice a plaintext password in the subscription connection string; this is addressed starting with version 1.20.2 (see issue #2386).

Apply the above manifest to your Kubernetes cluster. Following the import job, the pg16 cluster will become operational:

$ kubectl get pods
NAME     READY   STATUS    RESTARTS   AGE
pg13-1   1/1     Running   0          10m
pg16-1   1/1     Running   0          2m36s

The import phase successfully creates the database schema:

$ kubectl exec -ti pg16-1 -c postgres -- psql app -c '\d'
            List of relations
 Schema |     Name     |   Type   | Owner
--------+--------------+----------+-------
 public | songs        | table    | app
 public | songs_id_seq | sequence | app
(2 rows)

To verify, query the songs table in the pg16 instance's app database:

$ kubectl exec -ti pg16-1 -c postgres -- psql app -c 'SELECT * FROM songs'
 id |     title
----+---------------
  1 | Back in black
(1 row)

The presence of the record confirms successful logical replication. Next, add another song to the pg13 instance and verify its replication to pg16:

$ kubectl exec -ti pg13-1 -c postgres \
  -- psql app -c "INSERT INTO songs(title) VALUES('Thunderstruck')"
INSERT 0 1

Then check:

$ kubectl exec -ti pg16-1 -c postgres \
  -- psql app -c 'SELECT * FROM songs'
 id |     title
----+---------------
  1 | Back in black
  2 | Thunderstruck
(2 rows)

Replication works!
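
Beyond spot-checking rows, you can monitor the replication stream through PostgreSQL's standard statistics views (a minimal sketch, using this example's pod names). On the destination, check the subscription's progress:

$ kubectl exec -ti pg16-1 -c postgres -- \
  psql app -c 'SELECT subname, received_lsn, latest_end_lsn FROM pg_stat_subscription'

On the origin, verify that the replication slot created for the subscription is active:

$ kubectl exec -ti pg13-1 -c postgres -- \
  psql app -c 'SELECT slot_name, active, confirmed_flush_lsn FROM pg_replication_slots'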

Before promoting the pg16 database to handle write operations from your application, ensure the songs_id_seq sequence is properly set using the setval() function to sync with the sequence in pg13.
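
A minimal sketch of that synchronization, using this example's data (the sequence value is 2 after the two inserts). First, read the current value on the origin:

$ kubectl exec -ti pg13-1 -c postgres -- \
  psql app -t -c 'SELECT last_value FROM songs_id_seq'

Then set it on the destination before the cutover:

$ kubectl exec -ti pg16-1 -c postgres -- \
  psql app -c "SELECT setval('songs_id_seq', 2)"

In a real migration you would script this step for every sequence in the database, reading each value from the origin at cutover time.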

Ready to Revolutionize Your Database Management

This article illustrates how, starting with CloudNativePG version 1.20.1, you can leverage the schemaOnly feature of the import capability to execute major online upgrades and seamless database imports using a blend of declarative and imperative methodologies.

Here’s the streamlined step-by-step online import and upgrade process:

1. On the origin database, create a user with the REPLICATION privilege and read access to the tables to migrate, and allow it to connect via pg_hba
2. On the origin database, create a publication for those tables
3. Bootstrap the destination cluster with a schemaOnly import of the application database
4. Create a subscription on the destination pointing to the origin's publication
5. Verify that changes are replicating
6. Synchronize sequences with setval()
7. Cut over your application to the new cluster and drop the subscription

We encourage feedback on the current methodology to enhance CloudNativePG, aiming for a fully automated and declarative process. Potential improvements encompass:

- Declarative management of publications and subscriptions
- Automated synchronization of sequences at cutover
- Removing the plaintext password from the subscription connection string (issue #2386)

With the right application of native logical replication, migrating a PostgreSQL 10+ database into a CloudNativePG-managed cluster of the same or a newer major version can achieve near-zero cutover time. This methodology is equally effective for transitioning to CloudNativePG from other PostgreSQL operators in Kubernetes, minimizing downtime to negligible levels.

While requiring temporary additional resources, this method offers valuable benefits:

- Near-zero downtime at cutover
- The opportunity to test your applications against the new major version before switching over
- A straightforward rollback path, since the origin cluster remains untouched until you decommission it

When CloudNativePG introduces pg_upgrade support (already achievable through hibernation capabilities when applied imperatively), you’ll have three distinct PostgreSQL database upgrade options:

- Offline upgrades through the import facility, based on logical backup and restore
- Online upgrades through native logical replication, as described in this article
- Offline in-place upgrades with pg_upgrade

Select the approach that aligns optimally with your strategic database management goals.

Join our CloudNativePG community chat and participate in our bi-weekly development meetings to share your insights, contributing to the progression of PostgreSQL solutions on Kubernetes.

For comprehensive support with the open-source stack, EDB, the creators of CloudNativePG and key PostgreSQL project contributors, offer our Community 360 services. We also empower you to deploy EDB Postgres Extended and EDB Postgres Advanced, featuring our Oracle compatibility layer, through EDB CloudNativePG Cluster – compatible with Red Hat OpenShift and available with Long Term Support terms. Engage with us to seamlessly integrate Postgres within your Kubernetes infrastructure!

Featured picture: 2 Container ships alongside in Le Verdon, a subsidiary of Bordeaux port, France, available under CC BY-SA 3.0.

