Easy Postgres backups

How to use Kamal to back up and restore your Postgres database

An often overlooked area when you're standing up your application infrastructure is what to do in the face of disaster. How will you recover from data loss? What can you do now that your future self will thank you for later? In this post, I'll cover a way to help mitigate loss and get back online quickly. Kamal helps you stand up the majority of your application fast, and you can do everything in this post with a small VPS.

This post will go over two different ways to manage PostgreSQL backups with Kamal. Recovery ends up being a simple psql command run from your database container.

Both of the Docker projects we'll be taking advantage of take a similar approach: they run on a schedule to create a backup. Taking it a bit further with Docker volumes and networks gives us an easier restoration in the case of database loss.

Example 1: Local Backups

In the first example, we'll use a Docker image (postgres-backup-local) that backs up on your schedule and keeps the backups local to your network share. When we're keeping files and data around in Docker, it's important we work with volumes. If you're connecting to a file share locally, a good starting point is the Docker documentation on volume drivers.

Note: you may need to install the nfs-common package on your virtual machine if you're using Linux.

Once you've figured out where you need to connect, and you have the proper credentials and networking path lined up, you can create a new Docker volume similar to the one below. This example creates an NFS Docker volume named volume-name.

$ docker volume create --driver local --opt type=nfs --opt o=addr=IP-ADDRESS,rw --opt device=:/fileshare/path volume-name
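Before wiring this into Kamal, it's worth a quick sanity check that Docker can actually mount the share. For example (nothing here is Kamal-specific; alpine is just a throwaway image used to list the mount):

$ docker volume inspect volume-name
$ docker run --rm -v volume-name:/mnt alpine ls /mnt

If the ls fails or hangs, revisit the NFS address, export path, and credentials before moving on.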

In your Kamal deploy.yml configuration file you'll want to add a section under the accessories key, with your desired name. I've configured this accessory as db-backups.

    accessories:
      db-backups:
        image: prodrigestivill/postgres-backup-local
        roles:
          - web
        env:
          clear:
            SCHEDULE: '@daily'
            POSTGRES_USER: postgres
            POSTGRES_DB: my-db
            POSTGRES_HOST: local-ip
        volumes:
          - volume-name:/backups
        options:
          "user": "postgres:postgres"

Hint: a few things to point out in the config above. We're mounting our volume at the path /backups inside the container running our accessory. Your POSTGRES_USER should be set to whatever your database user is for the environment you're backing up.

You'll need to push your newly configured environment variables to the server.

$ kamal env push

Boot the db-backups accessory. The result is a container you can shell into that runs on the schedule defined in your environment variables.

$ kamal accessory boot db-backups

You should immediately see a file in the /backups/last directory from the most recent run of backup.sh (found on the container). Keep in mind that /backups is the volume linked to your network share. If you don't see a Postgres backup file, double-check that you can successfully connect to the accessory container and run bash backup.sh manually.
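If you'd rather not hunt for the container by hand, Kamal can check on the accessory for you (subcommand names here are from Kamal's accessory CLI, assuming the db-backups name used above):

$ kamal accessory logs db-backups
$ kamal accessory details db-backups

logs will show output from each scheduled run, and details confirms the container is up.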

Restore database from backup

To restore the database, log in to the accessory that has the link to the backups (run docker ps to find the container ID).

$ docker exec --tty --interactive $CONTAINER_ID /bin/sh -c "zcat /backups/last/$MY_DB-latest.sql.gz | psql --username=postgres --dbname=$MY_DB -W"

You can also mount the backup volume we created earlier on your Postgres database accessory, then run the restore command above from your Postgres container.
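As a sketch, the database side of that might look like the following; the db accessory name, postgres image tag, and data volume are illustrative, with volume-name being the NFS volume created earlier:

    accessories:
      db:
        image: postgres:15
        roles:
          - web
        volumes:
          - postgres-data:/var/lib/postgresql/data
          - volume-name:/backups

With that in place, the zcat-into-psql restore runs from the same container the database lives in.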

Example 2: Cloud Backups

In the second example, we'll use object storage and the Docker image (eeshugerman/postgres-backup-s3) to take backups on a schedule and push them to S3. This project has some nice features, so be sure to check out the linked GitHub repository.

In your deploy.yml configuration file, we'll want to add a new accessory that describes our S3 Postgres backup. I've named this one s3-pgbackup.

    accessories:
      s3-pgbackup:
        image: eeshugerman/postgres-backup-s3:15
        roles:
          - web
        env:
          clear:
            SCHEDULE: '@midnight'
            BACKUP_KEEP_DAYS: 14
          secret:
            - S3_REGION
            - S3_ACCESS_KEY_ID
            - S3_BUCKET
            - S3_PREFIX

You'll need to have set secret environment variables for S3 and Postgres, then push those via Kamal:

$ kamal env push

In the deploy.yml example above, you can see I have the image tagged to version 15 of Postgres; you can use any version from 12 to 16. We're explicitly telling this accessory that it will operate on our web server and sending the schedule and some retention rules along with it. Check the documentation for how the SCHEDULE variable works.
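For reference, SCHEDULE takes a cron-style expression, and the scheduler this image bundles also understands shortcuts like the @midnight used above. A few illustrative values:

    SCHEDULE: '@midnight'     # every day at 00:00
    SCHEDULE: '@every 6h'     # interval syntax
    SCHEDULE: '0 3 * * *'     # classic cron: daily at 03:00

Double-check these against the project's documentation, since the accepted syntax depends on the scheduler the image ships with.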

Boot the accessory.

$ kamal accessory boot s3-pgbackup

I have not had the chance to dive into adding a block storage device to Docker yet. I'm sure it's possible, so the restore could act similarly to the first example. (I'll revisit and update the post if I do.)
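In the meantime, the project's README does document a built-in restore path: the container ships a restore.sh script that pulls a backup down from S3 and applies it. If I'm reading it correctly, something like this restores the latest backup (docker ps to find the container ID):

$ docker exec $CONTAINER_ID sh restore.sh

and passing a timestamp argument restores a specific backup instead.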

Set a reminder to double check that the container is running backups on schedule at your S3 location.

That's all for now!