I wanted a cheap place to run some automation scripts and host a few web apps. After an afternoon of configuration, I have Postgres, Prefect, and secure HTTPS access running on a $7/month ARM server.
The Goal
Host a shared Postgres database, Prefect for workflow orchestration, and eventually some web applications—all on a single VPS. Keep costs low since these are hobby projects. Automate backups to Dropbox.
The Stack
| Component | Choice | Monthly Cost |
|---|---|---|
| VPS | Hetzner CAX21 (4 vCPU, 8GB RAM, ARM64) | ~$7 |
| Database | Postgres 18 (multiple databases, one instance) | — |
| Orchestration | Prefect 3 | — |
| HTTPS | Cloudflare Tunnel (Zero Trust Free) | $0 |
| Backups | pg_dump + rclone to Dropbox | $0 |
Total: roughly $7.60/month including the IPv4 address.
Why Hetzner ARM
I initially looked at US-based servers for lower latency, but Hetzner’s ARM instances (CAX series) offer better specs per dollar. The CAX21 gives you 8GB RAM for about $7—compared to 4GB for $10 on their AMD instances.
The tradeoff: my server is in Nuremberg, so SSH has about 100ms latency from Nashville. Not noticeable for background tasks; slightly annoying when typing long commands.
The Docker Compose Setup
Everything runs in Docker. The core services:
```yaml
services:
  db:
    image: postgres:18
    restart: unless-stopped
    environment:
      POSTGRES_USER: ${DB_USER:-admin}
      POSTGRES_PASSWORD: ${DB_PASSWORD:?required}
      POSTGRES_MULTIPLE_DATABASES: prefect,webapp
    volumes:
      - ./postgres_data:/var/lib/postgresql
      - ./postgres/init-databases.sh:/docker-entrypoint-initdb.d/init-databases.sh:ro
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U ${DB_USER:-admin}"]
      interval: 10s
      timeout: 5s
      retries: 5

  prefect:
    image: prefecthq/prefect:3-latest
    restart: unless-stopped
    entrypoint: ["prefect", "server", "start"]
    environment:
      PREFECT_API_DATABASE_CONNECTION_URL: postgresql+asyncpg://${DB_USER}:${DB_PASSWORD}@db:5432/prefect
      PREFECT_SERVER_API_HOST: 0.0.0.0
      PREFECT_UI_API_URL: https://prefect.yourdomain.com/api
    depends_on:
      db:
        condition: service_healthy

  tunnel:
    image: cloudflare/cloudflared:latest
    restart: unless-stopped
    command: tunnel --no-autoupdate run --token ${CLOUDFLARE_TUNNEL_TOKEN}
```
Key decisions:
- Multiple databases, one Postgres instance: A simple init script creates separate databases for Prefect and each web app.
- No exposed ports: The tunnel container handles all external traffic. The only port open on the firewall is my custom SSH port.
- Prefect 3: Requires the `pg_trgm` extension for search—the init script enables this automatically (sketched just after this list).
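The init script itself is nothing fancy. A minimal sketch of what goes in `postgres/init-databases.sh`, assuming the `POSTGRES_MULTIPLE_DATABASES` convention from the compose file above (the official image runs anything in `/docker-entrypoint-initdb.d/` on first start):

```bash
#!/usr/bin/env bash
set -euo pipefail

# Create one database per comma-separated name in POSTGRES_MULTIPLE_DATABASES
for db in $(echo "$POSTGRES_MULTIPLE_DATABASES" | tr ',' ' '); do
  psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d postgres \
    -c "CREATE DATABASE $db;"
done

# Prefect 3 needs pg_trgm for its search features
psql -v ON_ERROR_STOP=1 -U "$POSTGRES_USER" -d prefect \
  -c "CREATE EXTENSION IF NOT EXISTS pg_trgm;"
```

This only runs when the data directory is empty, so re-running `docker compose up` won't trip over existing databases.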
Cloudflare Tunnels
This was the piece that took the longest to understand. The mental model:
- The Tunnel is an outbound connection from your server to Cloudflare. You don’t open any inbound ports.
- Public Hostname routes tell the tunnel which internal Docker service handles which domain.
- Cloudflare Access puts a login gate in front of your apps.
The routing config in the Cloudflare dashboard:
- Hostname: `prefect.yourdomain.com`
- Service Type: HTTP
- URL: `prefect:4200` (Docker service name, not localhost)
That last part tripped me up. Since the tunnel container runs in the same Docker network as Prefect, you use the service name from docker-compose.yml.
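A quick sanity check of the whole chain from the outside (hostname as above): an unauthenticated request should bounce to the Access login rather than reach Prefect.

```bash
# Expect a 302 redirect to <your-team>.cloudflareaccess.com,
# not a response from the Prefect container itself.
curl -sI https://prefect.yourdomain.com | head -n 5
```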
The Security Layers
After setting everything up, here’s how a request flows:
```
Browser → Cloudflare DNS → Access Login (email OTP)
        → Tunnel → Docker Network → Prefect Container
```
And for SSH:
```
Terminal → Custom Port → UFW Firewall → SSH Daemon
```
These are completely separate paths. Someone scanning the internet sees nothing on ports 80/443/22—the tunnel is outbound-only, and SSH is on a non-standard port.
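The firewall side of that is a handful of UFW commands; a sketch, with 2222 standing in for my actual custom port:

```bash
sudo ufw default deny incoming   # drop everything inbound by default
sudo ufw default allow outgoing  # the tunnel only needs outbound
sudo ufw allow 2222/tcp comment 'SSH on custom port'
sudo ufw enable
sudo ufw status verbose
```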
What Actually Took Time
DNS Caching: After adding the CNAME record, my router cached the old “not found” result. Had to point my Mac directly at 1.1.1.1 to bypass the stale cache.
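If you hit the same thing, querying a public resolver directly shows whether the record itself is fine:

```bash
# Ask Cloudflare's resolver directly, bypassing the router's cache
dig +short prefect.yourdomain.com @1.1.1.1
```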
Ubuntu 24.04 SSH sockets: Changing the SSH port requires editing both /etc/ssh/sshd_config and creating a systemd socket override. Just editing the config file doesn’t work anymore.
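For reference, the socket side looks roughly like this (2222 again as the example port):

```bash
# sshd is socket-activated on Ubuntu 24.04, so ssh.socket needs an override
sudo mkdir -p /etc/systemd/system/ssh.socket.d
sudo tee /etc/systemd/system/ssh.socket.d/override.conf <<'EOF' >/dev/null
[Socket]
ListenStream=
ListenStream=2222
EOF
sudo systemctl daemon-reload
sudo systemctl restart ssh.socket
```

Keep your existing SSH session open while testing the new port, or you can lock yourself out.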
Prefect UI API URL: The PREFECT_UI_API_URL environment variable needs to be your public HTTPS URL, not localhost. The browser makes API calls, so it needs to know how to reach the server from outside.
Backup Strategy
A shell script dumps all databases using pg_dump -Fc (compressed custom format), then syncs to Dropbox via rclone:
```bash
#!/usr/bin/env bash
set -euo pipefail
# Assumes PGHOST/PGUSER/PGPASSWORD are provided via the environment
# (e.g. by the backup service in docker-compose).

# Get list of databases, skipping templates and the default postgres DB
DATABASES=$(psql -At -c "SELECT datname FROM pg_database WHERE datistemplate = false AND datname != 'postgres';")

for DB in $DATABASES; do
  # -Fc = compressed custom format, restorable with pg_restore
  pg_dump -Fc "$DB" > "/backups/${DB}_$(date +%Y%m%d).dump"
done

# Sync to cloud
rclone sync /backups dropbox:infra-backups --include "*.dump"

# Clean up local backups older than 7 days
find /backups -name "*.dump" -mtime +7 -delete
```
The backup runs as a Docker profile: `docker compose --profile backup up backup`. I'll add a cron job once I trust the setup.
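When I do, the entry will probably look something like this (the repo path and log file are placeholders):

```bash
# Nightly at 02:00. Full path to docker since cron's PATH is minimal.
0 2 * * * cd /opt/infra && /usr/bin/docker compose --profile backup up backup >> /var/log/pg-backup.log 2>&1
```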
What I’m Still Figuring Out
Deployment workflow: Right now I SSH in and run docker compose up -d manually. Want to set up a simple git pull && restart script, maybe triggered by GitHub Actions.
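The script itself would be trivial; a sketch, assuming the repo lives at /opt/infra:

```bash
#!/usr/bin/env bash
# Hypothetical deploy.sh: pull the latest config, restart changed services
set -euo pipefail
cd /opt/infra
git pull --ff-only
docker compose up -d --remove-orphans
```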
Monitoring: No alerting if the server runs out of disk space or a container crashes. Probably overkill for hobby projects, but would be nice.
ARM compatibility: Most Docker images support ARM64 now, but occasionally you find one that doesn’t. Haven’t hit issues with Postgres or Prefect, but some dependencies might only have x86 builds.
Repo Structure
I landed on three separate repositories:
- `infra` — Docker Compose, init scripts, backup scripts
- `automations` — Prefect flows and Python scripts
- `webapp` — Application code
Each can be deployed independently. A bug fix in the web app doesn’t require touching the database configuration.
Was It Worth It?
For $7/month I have:
- 8GB RAM server that can run multiple apps
- Automated daily backups to Dropbox
- Secure HTTPS with no open firewall ports
- Prefect dashboard for running scheduled Python scripts
The setup took about 4 hours, mostly due to DNS caching issues and learning Cloudflare’s newer “Hostname Routing” UI. Now that it’s running, adding new apps should be straightforward—just add another public hostname in Cloudflare and a new service in docker-compose.
Project Structure
The automations repo is separate from infrastructure:
```
automations/
├── flows/              # Prefect flow definitions
│   ├── cars_flow.py
│   └── quotes_flow.py
├── tasks/              # Reusable task functions
│   ├── car_listings.py
│   ├── car_scoring.py
│   └── slack.py
├── utils/              # Database models, helpers
├── prefect.yaml        # Deployment definitions
└── pyproject.toml
```
Flows are thin orchestration layers. Tasks do the actual work and can be reused across flows.
Deployment Config
The prefect.yaml registers flows with the server:
```yaml
name: automations
prefect-version: 3.0.0

pull:
  - prefect.deployments.steps.set_working_directory:
      directory: "{{ $AUTOMATIONS_DIR }}"

deployments:
  - name: car-scraper
    entrypoint: flows/cars_flow.py:fetch_car_listings
    work_pool:
      name: default-agent-pool

  - name: daily-quotes
    entrypoint: flows/quotes_flow.py:developer_wisdom_flow
    work_pool:
      name: default-agent-pool
    schedules:
      - cron: "0 8 * * *"
        timezone: "America/Chicago"
```
Deploy with `prefect deploy --all`. Start a worker to pick up jobs:

```bash
prefect worker start --pool default-agent-pool
```
Real Example: Finding a Car for Dad
My dad needed a car in Nepal. I built a pipeline to pull listings from a local marketplace API, enrich them with AI, and score them for the Nepal market.
The task pattern with retries for flaky APIs:
```python
import httpx
from prefect import task

@task(retries=3, retry_delay_seconds=5)
def fetch_listings_page(page_number: int) -> dict:
    response = httpx.get(API_URL, params={"page": page_number}, timeout=30.0)
    response.raise_for_status()  # raise on 4xx/5xx so the retries actually trigger
    return response.json()
```
AI enrichment asks OpenAI about Nepal-specific factors:
```python
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def fetch_nepal_assessment(year: int, make: str, model: str) -> dict:
    prompt = f"""For the {year} {make} {model}, assess for the NEPAL market:
- parts_availability: 1-10 (Toyota/Hyundai parts are everywhere)
- parts_affordability: 1-10 (Japanese cars cheap, European expensive)
- popularity_nepal: 1-10 (Swift, i10, Creta are common)
Return JSON only."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # keep scores deterministic across runs
    )
    return json.loads(response.choices[0].message.content)
```
The scoring weights reflect what matters on Nepal roads:
```python
WEIGHTS = {
    "ground_clearance": 6,     # >= 170mm for rough roads
    "airbags": 7,
    "parts_availability": 7,
    "parts_affordability": 6,
    "popularity_nepal": 5,
}
```
The flow pulls listings, enriches each with specs and market data, scores them, and saves to Postgres. I query for high-scoring cars under budget and send results to my dad.
This is the kind of thing that would take hours manually but runs unattended on a $7 server.
If you’re running similar hobby infrastructure, I’d love to hear how you handle the deployment workflow. Drop me an email at hello@ashishacharya.com.