# Setting Up Automatic Database Backups on Schedule
Data loss due to missing backups is one of the most expensive mistakes in web application operations. Configuring automatic scheduled backups is a basic requirement for any production server.
## Choosing a tool

| DB | Tool | Format |
|---|---|---|
| PostgreSQL | pg_dump / pg_dumpall | SQL / custom |
| MySQL/MariaDB | mysqldump / Percona XtraBackup | SQL / binary |
| MongoDB | mongodump | BSON |
| Redis | BGSAVE / AOF snapshot | RDB / AOF |
| SQLite | sqlite3 .backup | binary |
## PostgreSQL: basic setup

```bash
#!/bin/bash
# /opt/scripts/backup-postgres.sh
set -u

BACKUP_DIR="/var/backups/postgres"
DB_NAME="myapp_production"
DATE=$(date +%Y%m%d_%H%M%S)
FILENAME="$BACKUP_DIR/${DB_NAME}_${DATE}.dump"

mkdir -p "$BACKUP_DIR"

# Check the exit status of pg_dump itself, not of a later command
if pg_dump -U postgres -Fc "$DB_NAME" -f "$FILENAME"; then
    echo "Backup successful: $FILENAME"
else
    echo "Backup FAILED" >&2
    exit 1
fi

# Prune backups older than 7 days only after a successful dump
find "$BACKUP_DIR" -name "*.dump" -mtime +7 -delete
```
Cron entry (runs daily at 02:00):

```
0 2 * * * /opt/scripts/backup-postgres.sh >> /var/log/backup.log 2>&1
```
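If a dump ever takes longer than expected, the next cron run can start while the previous one is still writing. A common guard is to wrap the script in `flock` (sketch; the lock file path is illustrative):

```shell
# Crontab line with flock (lock path is illustrative):
#   0 2 * * * flock -n /var/lock/backup-postgres.lock /opt/scripts/backup-postgres.sh >> /var/log/backup.log 2>&1

# The mechanism, demonstrated standalone:
LOCK=$(mktemp)
flock -n "$LOCK" sleep 1 &         # first instance acquires the lock
sleep 0.2
if flock -n "$LOCK" true; then     # a second instance while the first runs...
    echo "lock acquired (first run already finished)"
else
    echo "second run skipped: lock busy"   # ...is refused and exits immediately
fi
wait
```

With `-n`, the overlapping run exits immediately instead of queueing behind the lock, which is usually what you want for a nightly job.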
## MySQL: setup with password via .my.cnf

```ini
# ~/.my.cnf (permissions 600)
[mysqldump]
user=backup_user
password=secret_password
```

```bash
mysqldump --single-transaction --routines --triggers myapp_db | \
    gzip > "/var/backups/mysql/myapp_$(date +%Y%m%d_%H%M%S).sql.gz"
```
`--single-transaction` is critical for InnoDB: it creates a consistent snapshot without locking tables. Note also that in a `mysqldump | gzip` pipeline the shell reports only gzip's exit status; enable `set -o pipefail` so a failed dump is detected.
## Rotation and storage

GFS (Grandfather-Father-Son) strategy:

- Daily — kept for 7 days
- Weekly — kept for 4 weeks
- Monthly — kept for 12 months

Implement it with `logrotate` or a script using `find -mtime`.
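A minimal GFS pruning sketch using only `find -mtime` (the per-tier directory layout is an assumption; promotion of dumps into `weekly/` and `monthly/` is left to the backup script itself):

```shell
#!/bin/bash
# GFS pruning sketch: the backup script drops copies into per-tier
# directories (assumed layout); each tier is then pruned by age.
BASE="${BASE:-/var/backups/postgres}"

prune() {  # prune <dir> <days>: delete dumps older than <days> days
    [ -d "$1" ] || return 0
    find "$1" -type f -name "*.dump" -mtime +"$2" -delete
}

# Promotion (e.g. copying Sunday's daily dump into weekly/) happens
# in the backup script; this job only enforces retention windows.
prune "$BASE/daily"   7     # Son: keep 7 days
prune "$BASE/weekly"  28    # Father: keep 4 weeks
prune "$BASE/monthly" 365   # Grandfather: keep 12 months
```

Run it from the same cron schedule as the backup; `find -mtime +N` matches files whose age exceeds N whole days.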
## Upload to S3/object storage

```bash
aws s3 cp "$FILENAME" "s3://company-backups/postgres/${DB_NAME}/" \
    --storage-class STANDARD_IA \
    --server-side-encryption AES256

# Or via rclone for any provider
rclone copy "$FILENAME" remote:backups/postgres/
```
Automatic rotation in S3 is configured through a lifecycle policy: move objects to Glacier after 30 days, delete them after one year.
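That policy might look like the following (rule ID and `postgres/` prefix are illustrative; applied with `aws s3api put-bucket-lifecycle-configuration`):

```json
{
  "Rules": [
    {
      "ID": "postgres-backup-rotation",
      "Filter": { "Prefix": "postgres/" },
      "Status": "Enabled",
      "Transitions": [{ "Days": 30, "StorageClass": "GLACIER" }],
      "Expiration": { "Days": 365 }
    }
  ]
}
```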
## Error notifications

```bash
# Send a Slack notification if the preceding backup command failed
if [ $? -ne 0 ]; then
    curl -X POST "$SLACK_WEBHOOK" \
        -H 'Content-Type: application/json' \
        -d '{"text": "CRITICAL: Database backup failed on '"$(hostname)"'"}'
fi
```
An alternative is a dead-man's-switch healthcheck via Healthchecks.io or Better Uptime: the script pings a URL after each successful backup, and the service raises an alert if the ping stops arriving.
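One way to wire this up (sketch; `HC_URL` is a hypothetical check URL such as `https://hc-ping.com/<uuid>`, and appending `/fail` is the Healthchecks.io convention for signalling an explicit failure):

```shell
#!/bin/bash
# Dead-man's-switch ping sketch for Healthchecks.io-style services.
hc_endpoint() {  # hc_endpoint <exit-status>: print the URL to ping
    if [ "$1" -eq 0 ]; then
        echo "$HC_URL"        # success ping resets the alert timer
    else
        echo "$HC_URL/fail"   # failure ping triggers an alert immediately
    fi
}

# Usage after the backup script:
#   /opt/scripts/backup-postgres.sh
#   curl -fsS -m 10 --retry 3 "$(hc_endpoint $?)" > /dev/null
```

Unlike the Slack webhook, this also catches the case where the backup host is down and the script never ran at all.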
## Backup verification

A backup that has never been restore-tested is not a backup. Run a weekly test:

```bash
# Restore into a scratch database (test_restore must already exist)
pg_restore -U postgres -d test_restore --clean "$FILENAME"

# Spot-check record counts
psql -U postgres -d test_restore -c "SELECT COUNT(*) FROM users;"
```
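Full restores can only run so often; between them, a cheap daily sanity check catches empty or stale dump files. A sketch (`verify_latest_dump` is a hypothetical helper; the age threshold is illustrative):

```shell
#!/bin/bash
# Sanity check sketch: the backup directory must contain at least one
# dump that is both recent and non-empty, else the nightly job is broken.
verify_latest_dump() {  # verify_latest_dump <dir> <max-age-minutes>
    local fresh
    fresh=$(find "$1" -name "*.dump" -mmin -"$2" -size +0c 2>/dev/null | head -n 1)
    if [ -z "$fresh" ]; then
        echo "ALERT: no fresh non-empty dump in $1" >&2
        return 1
    fi
    echo "OK: $fresh"
}
```

Run it from cron an hour after the backup window and route a non-zero exit into the same Slack or healthcheck alerting as above.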
## Timeline

Setting up backups for a single database, including rotation and S3 upload, takes about one business day.