n8n Self-Hosted with Docker:
Complete Installation Guide 2026

Running n8n in the cloud through n8n.cloud is convenient, but self-hosting gives you full control: unlimited workflows, custom integrations, no per-execution pricing, and complete data sovereignty. Docker is the cleanest, most reproducible way to do it. This guide walks you through everything from a fresh server to a production-ready n8n instance with PostgreSQL, Nginx, SSL, and automated backups.

Prerequisites

A Linux VPS with at least 2 vCPU, 2 GB RAM (4 GB recommended), Docker 24+ and Docker Compose v2 installed, a domain name pointed to your server, and root or sudo access. Ubuntu 22.04 LTS or Debian 12 are the most battle-tested choices.

Why Self-Host n8n?

Before diving into commands, it is worth understanding what you gain—and what you take on—by running your own n8n instance.

Factor                     | n8n Cloud     | Self-Hosted
Monthly cost (heavy use)   | $50–$500+     | $5–$30 VPS
Workflow limit             | Plan-based    | Unlimited
Execution history          | 7–30 days     | Forever
Custom nodes               | Restricted    | Full access
Data residency             | n8n servers   | Your server
Maintenance burden         | None          | Yours
Uptime SLA                 | 99.9%         | DIY

For teams processing sensitive data, running high-volume automations, or needing custom community nodes, self-hosting is the obvious choice. The operational overhead is manageable once the initial setup is done correctly.

Step 1 — Server Preparation

Start with a clean server. Update the system and install Docker using the official convenience script:

# Update system packages
sudo apt-get update && sudo apt-get upgrade -y

# Install Docker Engine via official script
curl -fsSL https://get.docker.com | sudo sh

# Add your user to the docker group (avoids needing sudo for docker)
sudo usermod -aG docker $USER
newgrp docker

# Verify installation
docker --version          # Docker version 26.x.x
docker compose version    # Docker Compose version v2.x.x

Create the project directory structure

Keep everything organized under a single directory. This makes backups and migrations trivial:

sudo mkdir -p /opt/n8n/{data,postgres,backups}
sudo chown -R $USER:$USER /opt/n8n
cd /opt/n8n

Step 2 — Environment Variables Configuration

Never hardcode secrets in your docker-compose.yml. Create a .env file that Docker Compose reads automatically:

# /opt/n8n/.env

# ── n8n Core ─────────────────────────────────────────
N8N_HOST=n8n.yourdomain.com
N8N_PORT=5678
N8N_PROTOCOL=https
WEBHOOK_URL=https://n8n.yourdomain.com/

# ── Security ─────────────────────────────────────────
N8N_ENCRYPTION_KEY=your-32-char-random-string-here
N8N_BASIC_AUTH_ACTIVE=true
N8N_BASIC_AUTH_USER=admin
N8N_BASIC_AUTH_PASSWORD=your-strong-password-here

# ── Database (PostgreSQL) ────────────────────────────
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=postgres
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=n8n
DB_POSTGRESDB_USER=n8n
DB_POSTGRESDB_PASSWORD=your-db-password-here
DB_POSTGRESDB_SCHEMA=public

# ── PostgreSQL container ─────────────────────────────
POSTGRES_DB=n8n
POSTGRES_USER=n8n
POSTGRES_PASSWORD=your-db-password-here

# ── Execution settings ───────────────────────────────
EXECUTIONS_MODE=regular
EXECUTIONS_TIMEOUT=3600
EXECUTIONS_DATA_SAVE_ON_ERROR=all
EXECUTIONS_DATA_SAVE_ON_SUCCESS=all
EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=true

# ── Timezone ─────────────────────────────────────────
GENERIC_TIMEZONE=America/New_York
TZ=America/New_York
Security Warning

Generate N8N_ENCRYPTION_KEY with openssl rand -hex 16. This key encrypts stored credentials — never lose it and never change it once workflows are running, or all saved credentials become unrecoverable.
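Assuming `openssl` is installed (it is on virtually every Ubuntu/Debian server), generating and inspecting the key looks like this:

```shell
# 16 random bytes, hex-encoded: exactly 32 characters
KEY=$(openssl rand -hex 16)
echo "N8N_ENCRYPTION_KEY=$KEY"

# In practice, append it straight to the env file used in this guide:
#   echo "N8N_ENCRYPTION_KEY=$KEY" >> /opt/n8n/.env
```

Store a copy of the key in your password manager as well; the `.env` file alone is not a backup.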

Step 3 — docker-compose.yml

This is the complete production-ready Compose file. It includes n8n, PostgreSQL with health checks, and a shared network:

# /opt/n8n/docker-compose.yml
version: '3.8'

services:
  postgres:
    image: postgres:16-alpine
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
    volumes:
      - ./postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${POSTGRES_USER} -d ${POSTGRES_DB}"]
      interval: 10s
      timeout: 5s
      retries: 5
    networks:
      - n8n-network

  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      - N8N_HOST=${N8N_HOST}
      - N8N_PORT=${N8N_PORT}
      - N8N_PROTOCOL=${N8N_PROTOCOL}
      - WEBHOOK_URL=${WEBHOOK_URL}
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
      - N8N_BASIC_AUTH_ACTIVE=${N8N_BASIC_AUTH_ACTIVE}
      - N8N_BASIC_AUTH_USER=${N8N_BASIC_AUTH_USER}
      - N8N_BASIC_AUTH_PASSWORD=${N8N_BASIC_AUTH_PASSWORD}
      - DB_TYPE=${DB_TYPE}
      - DB_POSTGRESDB_HOST=${DB_POSTGRESDB_HOST}
      - DB_POSTGRESDB_PORT=${DB_POSTGRESDB_PORT}
      - DB_POSTGRESDB_DATABASE=${DB_POSTGRESDB_DATABASE}
      - DB_POSTGRESDB_USER=${DB_POSTGRESDB_USER}
      - DB_POSTGRESDB_PASSWORD=${DB_POSTGRESDB_PASSWORD}
      - DB_POSTGRESDB_SCHEMA=${DB_POSTGRESDB_SCHEMA}
      - EXECUTIONS_MODE=${EXECUTIONS_MODE}
      - EXECUTIONS_TIMEOUT=${EXECUTIONS_TIMEOUT}
      - EXECUTIONS_DATA_SAVE_ON_ERROR=${EXECUTIONS_DATA_SAVE_ON_ERROR}
      - EXECUTIONS_DATA_SAVE_ON_SUCCESS=${EXECUTIONS_DATA_SAVE_ON_SUCCESS}
      - GENERIC_TIMEZONE=${GENERIC_TIMEZONE}
      - TZ=${TZ}
    volumes:
      - ./data:/home/node/.n8n
    depends_on:
      postgres:
        condition: service_healthy
    networks:
      - n8n-network

networks:
  n8n-network:
    driver: bridge
Pro Tip: Pin your n8n version

Replace n8nio/n8n:latest with a specific version like n8nio/n8n:1.82.0 in production. Docker does not pull new images on its own, but with :latest the next routine docker compose pull can silently jump you several versions ahead; pinning makes every upgrade a deliberate step.

Launch the stack

cd /opt/n8n

# Start in detached mode
docker compose up -d

# Watch logs until n8n is ready
docker compose logs -f n8n

# Verify both containers are healthy
docker compose ps

n8n will now be accessible at http://localhost:5678 — but only from the server itself. The next step exposes it securely to the internet via Nginx.

Step 4 — Nginx Reverse Proxy

Nginx sits in front of n8n, handling TLS termination and forwarding traffic to the container. Install it and create the site configuration:

sudo apt-get install -y nginx
# /etc/nginx/sites-available/n8n
server {
    listen 80;
    server_name n8n.yourdomain.com;

    # Redirect all HTTP to HTTPS
    return 301 https://$host$request_uri;
}

server {
    listen 443 ssl http2;
    server_name n8n.yourdomain.com;

    # SSL certificates (filled in by Certbot)
    ssl_certificate /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem;
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;

    # Increase upload size (for workflow imports)
    client_max_body_size 64M;

    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_cache_bypass $http_upgrade;

        # Required for n8n webhook long-polling
        proxy_read_timeout 3600s;
        proxy_buffering off;
    }
}
# Enable the site
sudo ln -s /etc/nginx/sites-available/n8n /etc/nginx/sites-enabled/
sudo nginx -t
sudo systemctl reload nginx

Step 5 — SSL with Let's Encrypt

Certbot automates SSL certificate issuance and renewal from Let's Encrypt for free:

# Install Certbot with Nginx plugin
sudo apt-get install -y certbot python3-certbot-nginx

# Obtain and install certificate (replaces the placeholder SSL lines)
sudo certbot --nginx -d n8n.yourdomain.com

# Test auto-renewal
sudo certbot renew --dry-run

# Certbot creates a systemd timer for renewal — verify it's active
sudo systemctl status certbot.timer

After this step, navigate to https://n8n.yourdomain.com in your browser. You should see the n8n login screen secured with a valid certificate. Enter the credentials you set in N8N_BASIC_AUTH_USER and N8N_BASIC_AUTH_PASSWORD.

Step 6 — Backup Strategies

Data loss is not an option in production. Implement both database and volume backups:

PostgreSQL dumps (recommended)

#!/bin/bash
# /opt/n8n/backup.sh
set -euo pipefail

BACKUP_DIR=/opt/n8n/backups
TIMESTAMP=$(date +%Y%m%d_%H%M%S)
BACKUP_FILE="$BACKUP_DIR/n8n_db_$TIMESTAMP.sql.gz"

# Dump and compress in a single pipeline
docker compose -f /opt/n8n/docker-compose.yml exec -T postgres \
  pg_dump -U n8n n8n | gzip > "$BACKUP_FILE"

# Delete backups older than 30 days
find "$BACKUP_DIR" -name "*.sql.gz" -mtime +30 -delete

echo "Backup completed: $BACKUP_FILE"
# Make executable and schedule daily at 2 AM
chmod +x /opt/n8n/backup.sh

# Append to root's crontab without clobbering existing entries
(sudo crontab -l 2>/dev/null; echo "0 2 * * * /opt/n8n/backup.sh >> /var/log/n8n-backup.log 2>&1") | sudo crontab -
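A backup is only as good as your ability to restore it, so script the restore path before you need it. This is a sketch of a hypothetical restore.sh, matching the file naming used by backup.sh above; the actual restore line is left commented out because it overwrites the live database:

```shell
#!/bin/bash
# restore.sh (sketch) — assumes dumps named n8n_db_YYYYmmdd_HHMMSS.sql.gz
BACKUP_DIR="${BACKUP_DIR:-/opt/n8n/backups}"

# Timestamped names sort lexicographically, so the last one is the newest
LATEST=$(ls "$BACKUP_DIR"/n8n_db_*.sql.gz 2>/dev/null | sort | tail -n 1)

if [ -z "$LATEST" ]; then
  echo "No backups found in $BACKUP_DIR" >&2
else
  echo "Newest backup: $LATEST"
  # Uncomment to actually restore (destructive; stop n8n first):
  # gunzip -c "$LATEST" | docker compose -f /opt/n8n/docker-compose.yml exec -T postgres psql -U n8n -d n8n
fi
```

Do a test restore into a throwaway container at least once, so you know the dumps are actually usable.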

Workflow export backup

Also export your workflows as JSON using the n8n CLI — this gives you human-readable, version-controllable backups that can later be re-imported with n8n import:workflow:

# Export all workflows to JSON files
docker compose exec n8n n8n export:workflow --all --output=/home/node/.n8n/backups/

# Export credentials (encrypted)
docker compose exec n8n n8n export:credentials --all --output=/home/node/.n8n/backups/

Step 7 — Updating n8n

Keeping n8n up to date takes only a few commands when using Docker:

cd /opt/n8n

# 1. Take a backup first
./backup.sh

# 2. Pull the new image
docker compose pull n8n

# 3. Recreate the container with the new image (a few seconds of downtime)
docker compose up -d --no-deps n8n

# Verify the new version
docker compose exec n8n n8n --version
Read Release Notes First

Always check the n8n changelog before major version updates. Some releases include database migrations that are irreversible — downgrading after such migrations requires restoring from backup.

Step 8 — Common Errors and Solutions

Error: "ECONNREFUSED 127.0.0.1:5432"

n8n can't reach PostgreSQL. This usually means the postgres container isn't ready yet or the network name is wrong. Check:

# Verify postgres is healthy
docker compose ps postgres

# The DB_POSTGRESDB_HOST must match the service name in docker-compose.yml
# Correct: DB_POSTGRESDB_HOST=postgres (not localhost or 127.0.0.1)

Error: "Webhook not working / 502 Bad Gateway"

Nginx can't reach n8n. Ensure n8n is binding to 127.0.0.1:5678 and the Nginx config proxies to the same address. Also check that WEBHOOK_URL ends with a trailing slash.

Error: "Credentials are not valid anymore"

This means the N8N_ENCRYPTION_KEY was changed. Restore the original key from your .env backup, or re-enter all credentials manually.

Error: "Container keeps restarting"

# Inspect logs from the last failed run
docker compose logs --tail=100 n8n

# Common causes: wrong DB password, missing env vars, port conflict
docker compose config   # validates the compose file and shows resolved values

Error: "No space left on device"

Execution history accumulates in PostgreSQL. Prune old executions via the n8n UI under Settings → Executions, or enable automatic pruning with environment variables:

# Add to /opt/n8n/.env, then recreate the container with `docker compose up -d`
EXECUTIONS_DATA_PRUNE=true
EXECUTIONS_DATA_MAX_AGE=2160    # hours; keeps roughly 90 days of history

Step 9 — Security Hardening

A default installation is functional but not hardened. Apply these settings before going live:

Firewall (UFW)

# Allow only SSH, HTTP, HTTPS. Block everything else, including port 5678
sudo ufw default deny incoming
sudo ufw allow ssh
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# NEVER expose port 5678 directly to the internet — use Nginx only

Fail2Ban for brute-force protection

sudo apt-get install -y fail2ban

# /etc/fail2ban/jail.d/nginx-http-auth.conf
[nginx-http-auth]
enabled  = true
port     = http,https
filter   = nginx-http-auth
logpath  = /var/log/nginx/error.log
maxretry = 5
bantime  = 3600

Additional n8n security environment variables

# Add to your .env file

# Disable the public API if not needed
N8N_PUBLIC_API_DISABLED=true

# Restrict which domains webhooks can redirect to
ALLOWED_EXTERNAL_DOMAINS=yourdomain.com,api.trusted-service.com

# Disable community nodes if not needed (reduces attack surface)
N8N_COMMUNITY_PACKAGES_ENABLED=false

# Restrict which modules Code/Function nodes are allowed to load
NODE_FUNCTION_ALLOW_BUILTIN=path,fs
NODE_FUNCTION_ALLOW_EXTERNAL=lodash,moment
Want to skip all this setup?

Use Scriflow to generate your n8n workflows with AI and import them into your self-hosted instance. You handle the hosting, Scriflow handles the workflow creation.

Bonus: Basic Monitoring

Know when n8n goes down before your users do:

#!/bin/bash
# Simple healthcheck script — add to crontab to run every 5 minutes
RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" https://n8n.yourdomain.com/healthz)
if [ "$RESPONSE" != "200" ]; then
  echo "n8n is DOWN (HTTP $RESPONSE)" | mail -s "n8n Alert" admin@yourdomain.com
fi

For more robust monitoring, consider integrating Uptime Kuma (also deployable via Docker) or using an external service like BetterUptime or UptimeRobot.
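If you go the Uptime Kuma route, it can live on the same host behind the same Nginx. A minimal Compose service sketch (the port binding and volume path are assumptions, adjust to taste):

```yaml
# Append under the services: key of /opt/n8n/docker-compose.yml,
# or run it as its own stack
  uptime-kuma:
    image: louislam/uptime-kuma:1
    restart: unless-stopped
    ports:
      - "127.0.0.1:3001:3001"   # proxy through Nginx, like n8n itself
    volumes:
      - ./uptime-kuma:/app/data
```

Point its HTTP monitor at the same /healthz endpoint used by the script above.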

Frequently Asked Questions

Can I run n8n with SQLite instead of PostgreSQL?
Yes — SQLite is the default. Simply omit all DB_* environment variables and the postgres service from your Compose file. However, SQLite handles concurrent writes poorly, so it is only suitable for single-user or low-volume setups. PostgreSQL is strongly recommended for production.
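For reference, a minimal SQLite-only Compose file might look like this (a sketch reusing the placeholder values from this guide; the SQLite database file simply lives inside the mounted data directory):

```yaml
# docker-compose.yml — minimal single-container setup (SQLite)
services:
  n8n:
    image: n8nio/n8n:latest
    restart: unless-stopped
    ports:
      - "127.0.0.1:5678:5678"
    environment:
      - N8N_HOST=n8n.yourdomain.com
      - N8N_PROTOCOL=https
      - WEBHOOK_URL=https://n8n.yourdomain.com/
      - N8N_ENCRYPTION_KEY=your-32-char-random-string-here
    volumes:
      - ./data:/home/node/.n8n   # SQLite file is stored in here
```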
How do I migrate from SQLite to PostgreSQL later?
n8n does not have a built-in migration tool for this path. The recommended approach is to export all workflows and credentials as JSON, set up a fresh PostgreSQL-backed instance, then re-import them. Execution history cannot be migrated and will be lost.
What's the minimum server spec for n8n?
For personal use, a $6/month VPS with 1 vCPU and 1 GB RAM runs n8n fine with SQLite. For production with PostgreSQL and 10+ concurrent workflows, use at least 2 vCPU and 2 GB RAM. The n8n process itself uses ~200–400 MB at rest.
Does this setup support n8n Queue Mode?
Not out of the box. Queue mode requires Redis and a separate worker container. It is designed for high-volume setups processing thousands of executions per hour. For most self-hosters, regular mode with PostgreSQL is sufficient.
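For the curious, switching later mostly means adding a Redis service and one or more workers, plus a few environment variables. A sketch (verify the exact names against the queue-mode docs for your n8n version):

```
# .env additions for queue mode (sketch)
EXECUTIONS_MODE=queue
QUEUE_BULL_REDIS_HOST=redis   # service name of the Redis container
QUEUE_BULL_REDIS_PORT=6379
```

Workers run the same n8nio/n8n image started with the command n8n worker, and must share the same database and N8N_ENCRYPTION_KEY as the main instance.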
Can I use Apache instead of Nginx?
Yes. Enable mod_proxy, mod_proxy_http, and mod_proxy_wstunnel. The key requirement is WebSocket support for the n8n editor live preview — without it, the UI may not update in real time. Nginx is simpler to configure correctly for this use case.
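If you do choose Apache, a minimal vhost sketch (requires Apache 2.4.47+ for the upgrade=websocket option; older versions need a mod_rewrite-based WebSocket rule instead):

```apache
<VirtualHost *:443>
    ServerName n8n.yourdomain.com

    SSLEngine on
    SSLCertificateFile    /etc/letsencrypt/live/n8n.yourdomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/n8n.yourdomain.com/privkey.pem

    ProxyPreserveHost On
    # upgrade=websocket lets mod_proxy_wstunnel carry the editor's live connection
    ProxyPass        / http://127.0.0.1:5678/ upgrade=websocket
    ProxyPassReverse / / 
</VirtualHost>
```

Note the ProxyPassReverse target should mirror the ProxyPass backend, i.e. http://127.0.0.1:5678/.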