Automated Multi-Server PostgreSQL Backups with a Simple Bash Script
A JSON-configured Bash script that backs up every database on every PostgreSQL server you own, with daily/monthly rotation and automatic retention cleanup.
I needed a way to back up PostgreSQL databases across multiple servers without installing anything heavy. The requirements were simple:
- Back up every non-system database on each server
- Organize backups by server and database name
- Keep daily backups with configurable retention
- Create monthly snapshots on the 1st
- Drive the whole thing from a single JSON config file
Here's what I ended up with.
Requirements
- bash
- `jq` — for parsing the JSON configuration
- `pg_dump` / `psql` — PostgreSQL client tools
The Config File
Create a config.json in the same directory as the script. It defines where to store backups, how long to keep them, and which servers to hit:
```json
{
  "backup_dir": "/backups/postgres",
  "retention_days": 7,
  "servers": [
    {
      "host": "db-primary.local",
      "user": "backup_user"
    },
    {
      "host": "db-replica.local",
      "user": "backup_user"
    }
  ]
}
```
Add as many servers as you need. The script iterates through all of them. Passwords are not stored here — they're managed via .pgpass (more on that below).
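Before wiring the script up, it's worth confirming the config parses the way you expect. This sketch runs the same `jq` expressions the script uses against a throwaway copy of the file (the temp-file setup is just for demonstration):

```shell
# Write a minimal config to a temp file and extract fields with jq,
# mirroring the expressions the backup script runs.
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "backup_dir": "/backups/postgres",
  "retention_days": 7,
  "servers": [
    { "host": "db-primary.local", "user": "backup_user" },
    { "host": "db-replica.local", "user": "backup_user" }
  ]
}
EOF

jq -r '.backup_dir' "$cfg"        # prints /backups/postgres
jq -r '.retention_days' "$cfg"    # prints 7
jq -r '.servers[].host' "$cfg"    # one hostname per line
rm -f "$cfg"
```

If any of these print `null`, a key is misspelled and the script would silently back up into a directory literally named `null`.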
The Script
Create postgres_backup.sh:
```bash
#!/bin/bash

set -e

# Check for jq dependency
if ! command -v jq &> /dev/null; then
    echo "jq is required but not installed."
    exit 1
fi

SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
CONFIG_FILE="${SCRIPT_DIR}/config.json"

read_config_and_check_dir() {
    if [ ! -f "$CONFIG_FILE" ]; then
        echo "Configuration file not found: $CONFIG_FILE"
        exit 1
    fi

    BACKUP_DIR=$(jq -r '.backup_dir' "$CONFIG_FILE")
    RETENTION_DAYS=$(jq -r '.retention_days' "$CONFIG_FILE")

    if [ ! -d "$BACKUP_DIR" ]; then
        echo "Creating backup directory: $BACKUP_DIR"
        if ! mkdir -p "$BACKUP_DIR"; then
            echo "Failed to create backup directory."
            exit 1
        fi
    fi

    echo "Using backup directory: $BACKUP_DIR"
}

create_backup() {
    local HOST=$1
    local USER=$2
    local DB=$3
    local BACKUP_PATH="$BACKUP_DIR/$HOST/$DB"

    mkdir -p "$BACKUP_PATH/daily" "$BACKUP_PATH/monthly"

    echo "Backing up database: $DB"
    pg_dump -h "$HOST" -U "$USER" \
        -d "$DB" -F c -f "$BACKUP_PATH/daily/$DB-$DATE.dump"

    # Monthly snapshot on the 1st
    if [ "$(date +%d)" = "01" ]; then
        cp "$BACKUP_PATH/daily/$DB-$DATE.dump" \
           "$BACKUP_PATH/monthly/$DB-$MONTH.dump"
    fi
}

cleanup_old_backups() {
    local HOST=$1
    local DB=$2
    local BACKUP_PATH="$BACKUP_DIR/$HOST/$DB/daily"

    find "$BACKUP_PATH" -name "*.dump" -type f \
        -mtime +"$RETENTION_DAYS" -delete
}

list_databases() {
    local HOST=$1
    local USER=$2

    psql -h "$HOST" -U "$USER" -d "postgres" \
        -t -c "SELECT datname FROM pg_database
               WHERE datistemplate = false
               AND datname != 'postgres';" \
        | grep -v '^$' | sed 's/^ *//g'
}

main() {
    read_config_and_check_dir

    DATE=$(date +"%Y-%m-%d")
    MONTH=$(date +"%Y-%m")

    echo "Starting backup process..."

    jq -c '.servers[]' "$CONFIG_FILE" | while read -r server; do
        HOST=$(echo "$server" | jq -r '.host')
        USER=$(echo "$server" | jq -r '.user')

        echo "Backing up server: $HOST"

        DATABASES=$(list_databases "$HOST" "$USER")

        if [ -z "$DATABASES" ]; then
            echo "No databases found on $HOST. Skipping."
            continue
        fi

        echo "Databases found:"
        echo "$DATABASES"

        echo "$DATABASES" | while read -r DB; do
            create_backup "$HOST" "$USER" "$DB"
            cleanup_old_backups "$HOST" "$DB"
        done
    done

    echo "Backup process completed."
}

main
```

Note that all variables passed to `pg_dump`, `find`, and the `jq` pipelines are quoted, so hostnames or database names with unusual characters won't break word splitting. With `set -e` in effect, a failed `pg_dump` stops the run rather than silently continuing.
Setting It Up
1. Set up .pgpass for password-free authentication
Since the cron job runs as root, the file needs to live at /root/.pgpass:
```shell
sudo touch /root/.pgpass
sudo chmod 600 /root/.pgpass
sudo nano /root/.pgpass
```
Add one line per server using the format hostname:port:database:username:password. Use * for the database field to match all databases on that server:
```
localhost:5432:*:local_pg_user:your_password
db-replica.local:5432:*:backup_user:your_password
```
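libpq silently ignores a `.pgpass` file whose permissions are looser than `0600`, which shows up as a confusing password prompt rather than an error. A quick permission check is cheap insurance (sketched here against a temp file standing in for `/root/.pgpass`; the GNU/BSD `stat` fallback is an assumption about your platform):

```shell
# Verify that a pgpass-style file has the 0600 mode libpq requires.
# Demonstrated on a temp file; point $f at /root/.pgpass in practice.
f=$(mktemp)
chmod 600 "$f"
mode=$(stat -c '%a' "$f" 2>/dev/null || stat -f '%Lp' "$f")  # GNU stat, then BSD stat
if [ "$mode" = "600" ]; then
    echo "permissions ok"
else
    echo "fix with: chmod 600 $f"
fi
rm -f "$f"
```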
2. Make the script executable and do a test run
```shell
chmod +x postgres_backup.sh
sudo ./postgres_backup.sh
```
Directory Structure
After a few days of running, your backup directory looks like this:
```
/backups/postgres/
├── db-primary.local/
│   ├── myapp/
│   │   ├── daily/
│   │   │   ├── myapp-2026-03-28.dump
│   │   │   ├── myapp-2026-03-29.dump
│   │   │   └── myapp-2026-03-30.dump
│   │   └── monthly/
│   │       └── myapp-2026-03.dump
│   └── analytics/
│       ├── daily/
│       └── monthly/
└── db-replica.local/
    └── ...
```
Each server gets its own directory, each database gets daily/ and monthly/ subdirectories, and retention cleanup only touches daily/.
Setting Up the Cron Job
Add it to root's crontab to run daily at 2 AM:
```shell
sudo crontab -e
```
```
0 2 * * * /path/to/postgres_backup.sh
```
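If a backup run can ever outlast the gap until the next scheduled run, wrapping the cron command in `flock` prevents two instances from overlapping. This is a sketch; the lock-file path is an arbitrary choice:

```shell
# Cron entry variant: flock -n takes an exclusive lock and skips the run
# entirely (non-blocking) if the previous backup still holds it.
# /var/lock/postgres_backup.lock is a hypothetical lock-file path.
0 2 * * * flock -n /var/lock/postgres_backup.lock /path/to/postgres_backup.sh
```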
How It Works
- Reads `config.json` for backup directory, retention policy, and server list
- Connects to each server and queries `pg_database` for all non-system databases
- Dumps each database using `pg_dump` in custom format (`-F c`), the most flexible format for restores
- Copies the daily dump to the monthly directory on the 1st of each month
- Cleans up daily dumps older than the configured retention period
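The cleanup step is just `find` with a modification-time filter. You can see the cutoff behavior on throwaway files (hypothetical filenames; GNU `touch -d` is assumed for backdating):

```shell
# Demonstrate the -mtime +N cutoff the cleanup function relies on:
# dumps older than RETENTION_DAYS are deleted, newer ones survive.
RETENTION_DAYS=7
dir=$(mktemp -d)
touch -d '10 days ago' "$dir/myapp-2026-03-20.dump"   # past retention
touch "$dir/myapp-2026-03-30.dump"                    # fresh
find "$dir" -name "*.dump" -type f -mtime +"$RETENTION_DAYS" -delete
ls "$dir"   # only the fresh dump remains
rm -rf "$dir"
```

One subtlety of `-mtime +N`: it matches files modified strictly more than N+1 24-hour periods ago, so `+7` keeps roughly eight days of dumps, not seven.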
Why Custom Format?
The `-F c` flag produces a custom-format archive that supports:
- Selective restore (individual tables, schemas)
- Parallel restore with `pg_restore -j`
- Built-in compression (no need to pipe through gzip)
Restoring a Backup
```shell
# Full database restore
pg_restore -h localhost -U postgres -d myapp \
    /backups/postgres/db-primary.local/myapp/daily/myapp-2026-03-30.dump

# Restore a single table
pg_restore -h localhost -U postgres -d myapp \
    -t users \
    /backups/postgres/db-primary.local/myapp/daily/myapp-2026-03-30.dump
```
Limitations and Improvements
This is intentionally simple. A few things to consider:
- Credentials via `.pgpass`: much better than passwords in a config file, but make sure the file is `chmod 600` and owned by the user running the script.
- No encryption at rest: the dumps sit unencrypted on disk. Pipe through `gpg` or store them on an encrypted volume if needed.
- No alerting: if the cron job fails silently, you won't know. Pipe output to a log file and monitor it, or wrap the script in something that sends notifications on failure.
- Sequential execution: servers are backed up one at a time. For a large fleet, you could parallelize with `xargs` or GNU `parallel`.
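For the alerting gap, a thin wrapper scheduled in place of the script itself is often enough. This is a sketch under assumptions: the log path is arbitrary, and the final `echo` stands in for whatever notifier you actually use (mail, a webhook, etc.):

```shell
#!/bin/bash
# Hypothetical cron wrapper: capture all backup output to a log and
# emit an alert line only when the script exits non-zero.
LOG=/var/log/postgres_backup.log

if ! /path/to/postgres_backup.sh >> "$LOG" 2>&1; then
    # Replace this echo with a mail/webhook call to your alerting system.
    echo "postgres backup FAILED at $(date -u +%FT%TZ), see $LOG" >&2
fi
```

Cron mails anything written to stderr (if a mailer is configured), so even this bare version turns silent failures into a message.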
For my use case — a handful of servers with modest databases — this has been running reliably for over a year without any changes.