The Dreadnode platform includes containerized Postgres and ClickHouse databases by default. For production environments, you may wish to connect to external managed databases or implement backup strategies.

Remote Databases

You can configure the platform to use external database servers by updating the persistent environment configuration.
Data Migration Required: Switching to a remote database starts with a blank schema. If you have existing data in the local containerized database, you must back it up and restore it to the new remote location manually; the platform does not migrate data automatically.
To use a remote Postgres database (e.g., AWS RDS, Google Cloud SQL):
dreadnode platform configure \
  DATABASE_HOST db.example.com \
  DATABASE_PORT 5432 \
  DATABASE_USER myuser \
  DATABASE_PASSWORD secure-password \
  DATABASE_NAME platform
Requirement: Ensure your Postgres server allows connections from the platform host.
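To confirm that requirement before switching over, you can run a minimal reachability check from the platform host. This is a sketch that uses bash's built-in /dev/tcp (so no Postgres client tools are needed); the hostname and port are the example values from the configuration above, and the `check_db_port` helper is illustrative:

```shell
# Hypothetical helper: exits 0 only if host:port accepts a TCP connection.
check_db_port() {
  (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null
}

if check_db_port db.example.com 5432; then
  echo "Postgres port reachable"
else
  echo "connection blocked - check firewall rules and pg_hba.conf"
fi
```

Note that TCP reachability only confirms the network path; authentication is still governed by your server's pg_hba.conf and credentials.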

Backup & Restore

Logical backups (pg_dump, ClickHouse BACKUP) are recommended over volume snapshots for consistency.
Run pg_dump from a temporary container against your database:
docker run --rm \
  -e PGPASSWORD=$DATABASE_PASSWORD \
  postgres:16 \
  pg_dump -h $DATABASE_HOST -p 5432 \
  -U $DATABASE_USER -d $DATABASE_NAME \
  -Fc > postgres_$(date +%Y%m%d).dump
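To restore such a dump later (for example, into a new remote database during a migration), pg_restore can be run the same way. The guard below is an illustrative addition; --clean --if-exists drops existing objects before recreating them, so use it deliberately:

```shell
# Example filename mirrors the backup naming; adjust the date to your backup.
DUMP=postgres_$(date +%Y%m%d).dump

if [ -s "$DUMP" ]; then
  docker run --rm -i \
    -e PGPASSWORD=$DATABASE_PASSWORD \
    postgres:16 \
    pg_restore -h $DATABASE_HOST -p 5432 \
    -U $DATABASE_USER -d $DATABASE_NAME \
    --clean --if-exists < "$DUMP"
else
  echo "dump file missing or empty: $DUMP" >&2
fi
```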
For ClickHouse, use the native BACKUP command to S3 or compatible storage:
BACKUP DATABASE platform TO S3(
  's3://my-bucket/backups/platform/{timestamp}',
  'ACCESS_KEY', 'SECRET_KEY'
);
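The corresponding restore is symmetric: ClickHouse's RESTORE statement mirrors BACKUP. A sketch, assuming clickhouse-client is installed and can reach the server, with the same example bucket and keys as above:

```shell
# RESTORE mirrors the BACKUP statement; bucket and keys are the example values above.
QUERY="RESTORE DATABASE platform FROM S3(
  's3://my-bucket/backups/platform/{timestamp}',
  'ACCESS_KEY', 'SECRET_KEY')"

clickhouse-client --query "$QUERY" \
  || echo "restore failed - is clickhouse-client installed and the server reachable?" >&2
```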

Air-Gapped Deployment

For environments without internet access, you must manually transfer the Docker images and configuration templates.

1. Prepare (On an Online Machine)

  1. Install the SDK: pip install dreadnode
  2. Authenticate: dreadnode login
  3. Download Templates:
    dreadnode platform download --tag latest-amd64
    
  4. Pull Images: Manually pull the images listed in ~/.dreadnode/platform/<tag>/docker-compose.yaml.
    docker pull dreadnode/platform-api:latest
    docker pull dreadnode/platform-ui:latest
    # ... pull other dependent images (postgres, clickhouse, etc)
    
  5. Save Images:
    docker save dreadnode/platform-api:latest | gzip > platform-api.tar.gz
    docker save dreadnode/platform-ui:latest | gzip > platform-ui.tar.gz
    # ... save others
    
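Steps 4 and 5 can be scripted so that no image listed in the compose file is missed. A sketch, assuming the templates were downloaded with the latest-amd64 tag used above (adjust the path to your own tag directory):

```shell
COMPOSE=~/.dreadnode/platform/latest-amd64/docker-compose.yaml

# Extract every unique "image:" entry from the compose file,
# then pull and save each one as a gzipped tarball.
for image in $(grep -E '^[[:space:]]*image:' "$COMPOSE" | awk '{print $2}' | sort -u); do
  docker pull "$image"
  docker save "$image" | gzip > "$(echo "$image" | tr '/:' '__').tar.gz"
done
```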

2. Transfer & Load (On the Air-Gapped Machine)

  1. Transfer the ~/.dreadnode/platform directory and the .tar.gz image files to the target machine.
  2. Load the images:
    docker load < platform-api.tar.gz
    docker load < platform-ui.tar.gz
    # ...
    
  3. Start the platform (it will detect the existing templates and images):
    dreadnode platform start
    
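As a quick sanity check after starting, you can list the running containers. This is a sketch; the "platform" name filter is an assumption about how the containers are named on your host:

```shell
# List running platform containers with their status.
if command -v docker >/dev/null 2>&1; then
  docker ps --filter "name=platform" --format '{{.Names}}\t{{.Status}}'
else
  echo "docker not found in PATH" >&2
fi
```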

Hybrid Deployment Example

For a resilient production deployment, a hybrid approach is often best:
  • Compute: Run the Dreadnode API & UI containers on your own compute instances.
  • Data: Connect to managed cloud databases (RDS, ClickHouse Cloud).
  • Artifacts: Store large artifacts in S3 (configure S3_AWS_EXTERNAL_ENDPOINT_URL).
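Wiring those pieces together uses the same configure command shown earlier. The endpoint and hostname values below are illustrative placeholders for your own RDS instance and S3 region, not real endpoints:

```shell
# Managed Postgres (RDS) for data, external S3 for artifacts.
dreadnode platform configure \
  DATABASE_HOST mydb.abc123.us-east-1.rds.amazonaws.com \
  DATABASE_PORT 5432 \
  DATABASE_USER platform \
  DATABASE_PASSWORD secure-password \
  DATABASE_NAME platform \
  S3_AWS_EXTERNAL_ENDPOINT_URL https://s3.us-east-1.amazonaws.com
```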