Database · 2025

Database Backup Strategies for Production Apps: The Complete Guide

Updated April 2025 · 12 min read

Automated daily backups, S3 storage, restore procedures, and verification checklists for MySQL, PostgreSQL, and MongoDB.

You don't need a database backup strategy until the day you desperately need one. By then, it's too late to build one.

Database disasters happen. A developer runs a migration with a bug. A DELETE statement forgets its WHERE clause. A deployment goes wrong and corrupts data. A malicious actor gains access. In every scenario, the distance between "catastrophic" and "annoying but recoverable" is the quality of your last good backup.

This guide covers database backup strategies for production applications: what to back up, how often, how to automate it, how to verify the backups actually work, and how to restore when the inevitable happens.

The Three Backup Strategies

1. Full Database Dumps (Baseline Strategy)

A full dump exports the complete database — all tables, all data, all schema — to a single file. It's the simplest form of backup and the foundation of any backup strategy.

MySQL / MariaDB:

# Full dump
mysqldump -h hostname -u username -p database_name > backup_$(date +%Y%m%d_%H%M%S).sql

# Compressed dump (much smaller file)
mysqldump -h hostname -u username -p database_name | gzip > backup_$(date +%Y%m%d_%H%M%S).sql.gz

# With stored procedures and triggers
mysqldump --routines --triggers -h hostname -u username -p database_name | gzip > backup.sql.gz

# Tables only, no data (schema backup)
mysqldump --no-data -h hostname -u username -p database_name > schema.sql

PostgreSQL:

# Full dump (custom format — faster, supports parallel restore)
pg_dump -h hostname -U username -Fc database_name > backup_$(date +%Y%m%d).dump

# Plain SQL format
pg_dump -h hostname -U username database_name > backup.sql

# Compressed SQL
pg_dump -h hostname -U username database_name | gzip > backup.sql.gz

# All databases
pg_dumpall -h hostname -U postgres > all_databases.sql

MongoDB:

# Full dump
mongodump --uri="mongodb://username:password@hostname:27017/database" --out=./backup

# Archive format (single file)
mongodump --uri="mongodb://username:password@hostname:27017/database" --archive=backup.archive --gzip

Redis (snapshot):

# Redis snapshots (RDB files) are configured in redis.conf
# For an immediate snapshot:
redis-cli -h hostname -a password BGSAVE

# Copy the RDB file:
scp hostname:/var/lib/redis/dump.rdb ./backup_$(date +%Y%m%d).rdb

2. ApexWeave CLI Backup (Simplest for ApexWeave Users)

# Download a dump of your database with one command
apexweave db:dump mydb.apexweaveapp.com

# Saves as: mydb_dump_YYYYMMDD_HHMMSS.sql (MySQL), .dump (PostgreSQL), .archive (MongoDB)

# Include DROP statements for clean restore
apexweave db:dump mydb.apexweaveapp.com --clean --output ~/backups/mydb_$(date +%Y%m%d).sql.gz

# Specify output path
apexweave db:dump mydb.apexweaveapp.com --output ~/backups/production_backup.sql.gz

3. Continuous Replication (Advanced, High-Availability)

For databases where even minutes of data loss is unacceptable (financial transactions, healthcare records, real-time inventory):

  • MySQL/MariaDB: Binary log replication to a standby replica
  • PostgreSQL: Streaming replication with WAL (Write-Ahead Log) archiving
  • MongoDB: Replica sets

This is beyond the scope of most small-to-medium applications. For most apps, daily automated dumps to cloud storage (such as S3) are sufficient; true point-in-time recovery requires binary log or WAL archiving on top of those dumps.
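As a point of reference, the PostgreSQL option above starts with WAL archiving on the primary. A minimal postgresql.conf sketch follows; the bucket path and archive command are illustrative assumptions, not a production recipe:

```ini
# postgresql.conf on the primary (illustrative values)
wal_level = replica        # generate enough WAL for a standby to replay
archive_mode = on          # ship each completed WAL segment
archive_command = 'aws s3 cp %p s3://your-backup-bucket/wal/%f'   # %p = segment path, %f = file name
max_wal_senders = 3        # connection slots for streaming standbys
```

A standby then streams from the primary or replays the archived segments; the full procedure (base backup, recovery configuration) is documented in the PostgreSQL manual.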

Backup Schedule: How Often Is Enough?

The answer depends on your RPO (Recovery Point Objective) — how much data loss is acceptable.

App Type            | Acceptable Data Loss | Backup Frequency
Personal blog       | Up to 24 hours       | Daily
Business site       | Up to 8 hours        | 3x daily
E-commerce store    | Up to 1 hour         | Hourly
SaaS with user data | Up to 15 minutes     | Every 15 minutes
Financial/medical   | Near-zero            | Continuous replication

For most web apps, daily automated backups with 30-day retention are the minimum viable strategy.

Automated Backup to S3 (Set Once, Forget Forever)

This setup runs automatically every day, uploads to S3, and deletes old backups after 30 days.

Prerequisites

  • AWS S3 bucket created
  • AWS IAM user with S3 read/write permissions for that bucket
  • A machine that runs continuously (your hosting server, or a cron container)

Script: backup.sh

#!/bin/bash
set -o pipefail  # without this, $? after "mysqldump | gzip" reflects gzip, not mysqldump

# Configuration — set these as environment variables
DB_HOST="${DB_HOST}"
DB_USER="${DB_USER}"
DB_PASSWORD="${DB_PASSWORD}"
DB_NAME="${DB_NAME}"
S3_BUCKET="${S3_BUCKET}"
RETENTION_DAYS=30

# Generate filename
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="/tmp/backup_${DB_NAME}_${TIMESTAMP}.sql.gz"

# Create dump
echo "[$(date)] Starting backup of ${DB_NAME}"
mysqldump -h "${DB_HOST}" -u "${DB_USER}" -p"${DB_PASSWORD}" \
  --single-transaction \
  --routines \
  --triggers \
  "${DB_NAME}" | gzip > "${BACKUP_FILE}"

if [ $? -ne 0 ]; then
  echo "[$(date)] ERROR: Backup failed"
  exit 1
fi

# Upload to S3
aws s3 cp "${BACKUP_FILE}" "s3://${S3_BUCKET}/backups/${BACKUP_FILE##*/}"

if [ $? -eq 0 ]; then
  echo "[$(date)] Backup uploaded: ${BACKUP_FILE##*/}"
  rm "${BACKUP_FILE}"
else
  echo "[$(date)] ERROR: S3 upload failed"
  exit 1
fi

# Delete backups older than RETENTION_DAYS
cutoff=$(date -d "-${RETENTION_DAYS} days" +%Y%m%d)
aws s3 ls "s3://${S3_BUCKET}/backups/" | \
  awk '{print $4}' | \
  while read file; do
    # take only the first 8-digit run (the YYYYMMDD stamp)
    file_date=$(echo "$file" | grep -o '[0-9]\{8\}' | head -n 1)
    if [ -n "$file_date" ] && [ "$file_date" -lt "$cutoff" ]; then
      aws s3 rm "s3://${S3_BUCKET}/backups/$file"
      echo "[$(date)] Deleted old backup: $file"
    fi
  done

echo "[$(date)] Backup process complete"
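The retention step compares 8-digit YYYYMMDD stamps numerically against a cutoff computed with GNU date. A quick local sanity check of that logic (no AWS involved):

```shell
RETENTION_DAYS=30
cutoff=$(date -d "-${RETENTION_DAYS} days" +%Y%m%d)

# a stamp well in the past falls before the cutoff
file_date="20200101"
if [ "$file_date" -lt "$cutoff" ]; then
  echo "would delete $file_date"
fi

# today's stamp is kept
today=$(date +%Y%m%d)
if [ "$today" -lt "$cutoff" ]; then
  echo "would delete $today"
else
  echo "would keep $today"
fi
```

Note that `date -d` is GNU-specific; on BSD/macOS the equivalent is `date -v-30d`.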

Schedule with Cron

Run as a post-deployment hook or on a cron schedule:

# Edit crontab
crontab -e

# Run backup daily at 2:00 AM
0 2 * * * /opt/backup.sh >> /var/log/backup.log 2>&1

# Run hourly for high-value data
0 * * * * /opt/backup.sh >> /var/log/backup.log 2>&1
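Hourly schedules can overlap if a backup runs longer than the interval. A common guard, assuming the util-linux flock utility is available on the server:

```shell
# crontab entry (shown as a comment): wrap the script so a second
# invocation skips instead of stacking up behind the first
#   0 * * * * flock -n /tmp/backup.lock /opt/backup.sh >> /var/log/backup.log 2>&1

# demonstration: -n fails immediately if the lock is already held,
# otherwise the command runs while holding the lock
flock -n /tmp/demo.lock -c 'echo "got the lock"'
```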

Using ApexWeave CLI for On-Demand Backups

# Manual backup before a risky migration
apexweave db:dump mydb.apexweaveapp.com --output ~/backups/pre-migration-$(date +%Y%m%d).sql.gz

# Automated via cron using the CLI
0 2 * * * apexweave db:dump mydb.apexweaveapp.com --output ~/backups/backup-$(date +%Y%m%d).sql.gz

S3 Bucket Configuration for Backups

Create a dedicated S3 bucket for backups with proper security:

Bucket policy — allow only your backup IAM user:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {"AWS": "arn:aws:iam::ACCOUNT_ID:user/backup-user"},
      "Action": ["s3:PutObject", "s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::your-backup-bucket",
        "arn:aws:s3:::your-backup-bucket/*"
      ]
    }
  ]
}

Enable S3 versioning: Protects against accidental backup overwrites.

Enable S3 lifecycle rules:

{
  "Rules": [
    {
      "ID": "DeleteOldBackups",
      "Status": "Enabled",
      "Filter": {"Prefix": "backups/"},
      "Expiration": {"Days": 30},
      "Transitions": [
        {
          "Days": 7,
          "StorageClass": "STANDARD_IA"
        }
      ]
    }
  ]
}

The transition moves week-old backups to Standard-IA (a cheaper storage class) until the 30-day expiration deletes them.

Verifying Backups Actually Work

A backup that can't be restored is not a backup — it's a false sense of security.

Test your restore process monthly:

# Create a test database
mysql -u root -p -e "CREATE DATABASE test_restore;"

# Restore to test database
gunzip -c backup_20250415_020000.sql.gz | mysql -u root -p test_restore

# Verify data
mysql -u root -p test_restore -e "SELECT COUNT(*) FROM users;"
mysql -u root -p test_restore -e "SELECT * FROM users LIMIT 5;"

# Compare with production
mysql -u root -p production_db -e "SELECT COUNT(*) FROM users;"

# Clean up test
mysql -u root -p -e "DROP DATABASE test_restore;"

For PostgreSQL:

# Create test database
createdb -U postgres test_restore

# Restore (custom format)
pg_restore -U postgres -d test_restore backup.dump

# Restore (SQL format)
psql -U postgres test_restore < backup.sql

# Verify
psql -U postgres test_restore -c "SELECT COUNT(*) FROM users;"

# Clean up
dropdb -U postgres test_restore

If this restore process succeeds and data looks correct, your backup is verified. If it fails, you know before disaster strikes.
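Between full restore tests, you can also cheaply catch truncated or corrupted archives: gzip can verify a compressed dump's integrity without extracting it. A self-contained demonstration (the tiny file stands in for a real backup):

```shell
# create a tiny compressed dump to demonstrate against
echo "SELECT 1;" > /tmp/demo_backup.sql
gzip -f /tmp/demo_backup.sql

# -t tests the archive's checksums without writing anything to disk
if gunzip -t /tmp/demo_backup.sql.gz; then
  echo "archive OK"
else
  echo "archive CORRUPT"
fi
```

This catches transfer corruption and partial uploads, but not a dump that was wrong to begin with, so it complements rather than replaces the restore test.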

Restore Procedures: When Things Go Wrong

Scenario 1: Accidental Data Deletion

# Step 1: Stop your app immediately (prevent more writes)
apexweave run "exit 1" myapp.com  # Or stop the app from dashboard

# Step 2: Get the last good backup
apexweave db:dump mydb.apexweaveapp.com --output /tmp/current-state.sql.gz
# (Save current state for forensics)

# Step 3: Restore from backup
apexweave db:import mydb.apexweaveapp.com --file ~/backups/backup_20250415.sql.gz

# Step 4: Restart app
# Dashboard → Deploy button

Scenario 2: Bad Migration (Data Corruption)

# Restore from pre-migration backup
apexweave db:import mydb.apexweaveapp.com --file ~/backups/pre-migration-20250415.sql.gz

# OR roll back to previous code deployment
# Dashboard → Logs tab → previous deployment → Rollback

# The rollback restores code; the db:import restores data

Scenario 3: Full Database Recovery from S3

# Download latest backup from S3
aws s3 cp s3://your-backup-bucket/backups/backup_20250415_020000.sql.gz ./

# Import to ApexWeave database
apexweave db:import mydb.apexweaveapp.com --file backup_20250415_020000.sql.gz

# Or import from URL directly
apexweave db:import mydb.apexweaveapp.com --url https://your-backup-bucket.s3.amazonaws.com/backups/backup.sql.gz

MySQL-Specific: Logical vs Physical Backups

Logical Backups (mysqldump)

  • Output: SQL statements (INSERT INTO, CREATE TABLE)
  • Portable: Works across MySQL versions, can restore to different server
  • Slower to create and restore on large databases
  • Can be inspected and edited with a text editor
  • Best for: Most web apps under ~50GB

Physical Backups (Percona XtraBackup, MySQL Enterprise Backup)

  • Output: Raw database files (ibdata, .ibd files)
  • Faster for large databases
  • Not portable across MySQL versions
  • Not human-readable
  • Best for: Large databases where dump/restore time matters

For most web apps (databases under ~50GB), mysqldump is the right tool.

PostgreSQL-Specific: pg_dump Formats

PostgreSQL's pg_dump supports four output formats:

Format    | Flag      | Use Case
Plain SQL | (default) | Human-readable, portable
Custom    | -Fc       | Most flexible, supports selective restore
Directory | -Fd       | Parallel dump for large databases
Tar       | -Ft       | Simple archive format

Recommended for most apps: Custom format (-Fc)

pg_dump -Fc -h hostname -U username database > backup.dump
pg_restore -h hostname -U username -d database backup.dump

Custom format supports:

  • Parallel restore (faster): pg_restore -j 4 ...
  • Restoring specific tables: pg_restore -t users backup.dump
  • Restoring to a different database name
  • Listing contents without restoring: pg_restore -l backup.dump

What to Include in Each Backup

Not everything needs to be in every backup:

Data Type                    | Backup Frequency       | Notes
User data (accounts, orders) | Daily minimum          | Critical; highest priority
Content (posts, products)    | Daily                  | Important
Sessions table               | Exclude                | Ephemeral; users re-login
Job queue table              | Include (with caution) | May have in-flight jobs
Cache tables                 | Exclude                | Regenerated automatically
Audit/log tables             | Weekly or archive      | Retain but lower priority

Excluding sessions and cache tables significantly reduces both dump size and restore time.

For MySQL:

# Exclude specific tables
mysqldump -h hostname -u username -p database \
  --ignore-table=database.sessions \
  --ignore-table=database.cache \
  | gzip > backup.sql.gz

The Backup Checklist

Print this and check it quarterly:

  • [ ] Backups are automated (not manual)
  • [ ] Backups run at least daily for production data
  • [ ] Backups are stored off-server (S3, separate location)
  • [ ] Backup files are compressed (gzip/brotli)
  • [ ] Retention policy is configured (old backups are automatically deleted)
  • [ ] Last restore test was performed within 30 days
  • [ ] Restore procedure is documented and someone other than you can follow it
  • [ ] Backup failure alerts are configured (email/Slack if backup script fails)
  • [ ] Backup bucket has versioning enabled
  • [ ] Backup credentials are stored separately from application credentials
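The alerting item does not need infrastructure to start. A minimal sketch of a wrapper that flags any failing backup command (the webhook call is a commented placeholder, not a real endpoint):

```shell
# run a command; on failure, print an alert (this is where you would
# POST to a Slack webhook or send an email) and propagate the error
run_with_alert() {
  if ! "$@"; then
    echo "ALERT: backup step failed: $*" >&2
    # curl -s -X POST -d '{"text":"backup failed"}' "$SLACK_WEBHOOK_URL"
    return 1
  fi
}

run_with_alert false && echo "no alert" || echo "alert fired"
run_with_alert echo "backup step ok"
```

Wrapping the cron entry with this (or checking the script's exit code) turns silent backup failures into visible ones.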

Manage your production databases with CLI backup tools at apexweave.com/git-deployment.php. apexweave db:dump and apexweave db:import are included for MySQL, PostgreSQL, and MongoDB.
