
Automated Multi-Server PostgreSQL Backups with a Simple Bash Script

A JSON-configured Bash script that backs up every database on every PostgreSQL server you own, with daily/monthly rotation and automatic retention cleanup.


I needed a way to back up PostgreSQL databases across multiple servers without installing anything heavy. The requirements were simple:

  • Back up every non-system database on each server
  • Organize backups by server and database name
  • Keep daily backups with configurable retention
  • Create monthly snapshots on the 1st
  • Drive the whole thing from a single JSON config file

Here's what I ended up with.


Requirements

  • bash
  • jq — for parsing the JSON configuration
  • pg_dump / psql — PostgreSQL client tools

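Before wiring anything into cron, it's worth confirming those tools are actually on the PATH. A minimal preflight sketch (adjust the tool list to your environment):

```shell
# Quick preflight: report any required tool that's missing from the PATH
for tool in bash jq pg_dump psql; do
    command -v "$tool" > /dev/null 2>&1 || echo "missing: $tool"
done
```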
The Config File

Create a config.json in the same directory as the script. It defines where to store backups, how long to keep them, and which servers to hit:

```json
{
  "backup_dir": "/backups/postgres",
  "retention_days": 7,
  "servers": [
    {
      "host": "db-primary.local",
      "user": "backup_user"
    },
    {
      "host": "db-replica.local",
      "user": "backup_user"
    }
  ]
}
```

Add as many servers as you need. The script iterates through all of them. Passwords are not stored here — they're managed via .pgpass (more on that below).
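Since the script trusts this file completely, a quick `jq` sanity check before the first run catches typos early. A sketch (it writes a sample config to `/tmp` purely for illustration; point `jq` at your real `config.json` instead):

```shell
# Write a sample config for illustration
cat > /tmp/config.json <<'EOF'
{
  "backup_dir": "/backups/postgres",
  "retention_days": 7,
  "servers": [
    { "host": "db-primary.local", "user": "backup_user" }
  ]
}
EOF

# jq -e makes the exit status reflect the check, so this also works in scripts
jq -e '.backup_dir and .retention_days and (.servers | length > 0)' /tmp/config.json \
    && echo "config OK"
```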


The Script

Create postgres_backup.sh:

```bash
#!/bin/bash

set -e

# Check for jq dependency
if ! command -v jq &> /dev/null; then
    echo "jq is required but not installed."
    exit 1
fi

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
CONFIG_FILE="${SCRIPT_DIR}/config.json"

read_config_and_check_dir() {
    if [ ! -f "$CONFIG_FILE" ]; then
        echo "Configuration file not found: $CONFIG_FILE"
        exit 1
    fi

    BACKUP_DIR=$(jq -r '.backup_dir' "$CONFIG_FILE")
    RETENTION_DAYS=$(jq -r '.retention_days' "$CONFIG_FILE")

    if [ ! -d "$BACKUP_DIR" ]; then
        echo "Creating backup directory: $BACKUP_DIR"
        mkdir -p "$BACKUP_DIR" || { echo "Failed to create backup directory."; exit 1; }
    fi

    echo "Using backup directory: $BACKUP_DIR"
}

create_backup() {
    local HOST=$1
    local USER=$2
    local DB=$3
    local BACKUP_PATH="$BACKUP_DIR/$HOST/$DB"

    mkdir -p "$BACKUP_PATH/daily" "$BACKUP_PATH/monthly"

    echo "Backing up database: $DB"
    pg_dump -h "$HOST" -U "$USER" \
        -d "$DB" -F c -f "$BACKUP_PATH/daily/$DB-$DATE.dump"

    # Monthly snapshot on the 1st
    if [ "$(date +%d)" = "01" ]; then
        cp "$BACKUP_PATH/daily/$DB-$DATE.dump" \
           "$BACKUP_PATH/monthly/$DB-$MONTH.dump"
    fi
}

cleanup_old_backups() {
    local HOST=$1
    local DB=$2
    local BACKUP_PATH="$BACKUP_DIR/$HOST/$DB/daily"

    find "$BACKUP_PATH" -name "*.dump" -type f \
        -mtime +"$RETENTION_DAYS" -delete
}

list_databases() {
    local HOST=$1
    local USER=$2

    psql -h "$HOST" -U "$USER" -d "postgres" \
        -t -c "SELECT datname FROM pg_database
               WHERE datistemplate = false
               AND datname != 'postgres';" \
        | grep -v '^$' | sed 's/^ *//g'
}

main() {
    read_config_and_check_dir

    DATE=$(date +"%Y-%m-%d")
    MONTH=$(date +"%Y-%m")

    echo "Starting backup process..."

    jq -c '.servers[]' "$CONFIG_FILE" | while read -r server; do
        HOST=$(echo "$server" | jq -r '.host')
        USER=$(echo "$server" | jq -r '.user')

        echo "Backing up server: $HOST"

        DATABASES=$(list_databases "$HOST" "$USER")

        if [ -z "$DATABASES" ]; then
            echo "No databases found on $HOST. Skipping."
            continue
        fi

        echo "Databases found:"
        echo "$DATABASES"

        echo "$DATABASES" | while read -r DB; do
            create_backup "$HOST" "$USER" "$DB"
            cleanup_old_backups "$HOST" "$DB"
        done
    done

    echo "Backup process completed."
}

main
```

Setting It Up

1. Set up .pgpass for password-free authentication

Since the cron job runs as root, the file needs to live at /root/.pgpass:

```shell
sudo touch /root/.pgpass
sudo chmod 600 /root/.pgpass
sudo nano /root/.pgpass
```

Add one line per server using the format hostname:port:database:username:password. Use * for the database field to match all databases on that server:

```
localhost:5432:*:local_pg_user:your_password
db-replica.local:5432:*:backup_user:your_password
```
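Note that libpq silently ignores a `.pgpass` file whose permissions are looser than `0600`, so the `chmod` above is not optional. A quick way to verify the mode, demonstrated here on a scratch file:

```shell
# find -perm with an exact mode prints the path only if permissions match 0600
touch /tmp/pgpass_demo
chmod 600 /tmp/pgpass_demo
find /tmp/pgpass_demo -perm 600 -print
```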

2. Make the script executable and do a test run

```shell
chmod +x postgres_backup.sh
sudo ./postgres_backup.sh
```

Directory Structure

After a few days of running, your backup directory looks like this:

```
/backups/postgres/
├── db-primary.local/
│   ├── myapp/
│   │   ├── daily/
│   │   │   ├── myapp-2026-03-28.dump
│   │   │   ├── myapp-2026-03-29.dump
│   │   │   └── myapp-2026-03-30.dump
│   │   └── monthly/
│   │       └── myapp-2026-03.dump
│   └── analytics/
│       ├── daily/
│       └── monthly/
└── db-replica.local/
    └── ...
```

Each server gets its own directory, each database gets daily/ and monthly/ subdirectories, and retention cleanup only touches daily/.
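If you'd like to see what the retention pass would remove before trusting `find ... -delete`, swap `-delete` for `-print`. A self-contained sketch on a throwaway tree (GNU `touch -d` assumed for backdating):

```shell
# Build a tiny daily/ tree with one old and one fresh dump
mkdir -p /tmp/pgdemo/myapp/daily
touch -d '10 days ago' /tmp/pgdemo/myapp/daily/myapp-old.dump
touch /tmp/pgdemo/myapp/daily/myapp-new.dump

# Dry run: lists only files older than the 7-day retention window
find /tmp/pgdemo -path '*/daily/*.dump' -type f -mtime +7 -print
```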


Setting Up the Cron Job

Add it to root's crontab to run daily at 2 AM:

```shell
sudo crontab -e
```

```
0 2 * * * /path/to/postgres_backup.sh
```
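Cron output is easy to lose, so a variant worth considering appends everything the script prints to a log file (the script path and log location here are placeholders):

```
0 2 * * * /path/to/postgres_backup.sh >> /var/log/postgres_backup.log 2>&1
```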

How It Works

  1. Reads config.json for backup directory, retention policy, and server list
  2. Connects to each server and queries pg_database for all non-system databases
  3. Dumps each database using pg_dump in custom format (-F c) — the most flexible format for restores
  4. Copies the daily dump to the monthly directory on the 1st of each month
  5. Cleans up daily dumps older than the configured retention period
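The monthly-snapshot trigger in step 4 reduces to a day-of-month comparison. Wrapping it in a small helper makes the logic easy to verify against fixed dates (GNU `date -d` assumed; `check_monthly` is an illustrative name, not part of the script):

```shell
# Same test the script uses, parameterized over an arbitrary date
check_monthly() {
    if [ "$(date -d "$1" +%d)" = "01" ]; then
        echo "monthly snapshot"
    else
        echo "daily only"
    fi
}

check_monthly "2026-03-01"   # monthly snapshot
check_monthly "2026-03-30"   # daily only
```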

Why Custom Format?

The -F c flag produces a custom-format archive that supports:

  • Selective restore (individual tables, schemas)
  • Parallel restore with pg_restore -j
  • Compression built-in (no need to pipe through gzip)

Restoring a Backup

```shell
# Full database restore
pg_restore -h localhost -U postgres -d myapp \
    /backups/postgres/db-primary.local/myapp/daily/myapp-2026-03-30.dump

# Restore a single table
pg_restore -h localhost -U postgres -d myapp \
    -t users \
    /backups/postgres/db-primary.local/myapp/daily/myapp-2026-03-30.dump
```

Limitations and Improvements

This is intentionally simple. A few things to consider:

  • Credentials via .pgpass — Much better than passwords in a config file, but make sure the file is chmod 600 and owned by the user running the script.
  • No encryption at rest — The dumps sit unencrypted on disk. Pipe through gpg or store them on an encrypted volume if needed.
  • No alerting — If the cron job fails silently, you won't know. Pipe output to a log file and monitor it, or wrap the script in something that sends notifications on failure.
  • Sequential execution — Servers are backed up one at a time. For a large fleet, you could parallelize with xargs or GNU parallel.
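The `xargs` approach from the last bullet can be sketched in a few lines. This is a hypothetical variant, not the script above: `echo` stands in for the real per-server backup logic, and `-P 4` caps concurrency at four servers at a time:

```shell
# Fan per-server work out to parallel workers with xargs -P
printf '%s\n' db-primary.local db-replica.local \
    | xargs -P 4 -I {} sh -c 'echo "backing up $1"' _ {}
```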

For my use case — a handful of servers with modest databases — this has been running reliably for over a year without any changes.