Two years ago I migrated a paying client off Heroku because the bill jumped to $487 a month for a Node.js API doing 40 req/s. We moved the entire stack — app, Postgres, Redis, monitoring — onto a single DigitalOcean droplet at $24 a month. Same uptime, faster latency, and we put the savings into a second droplet for hot standby. Two years on the new setup has had eight minutes of unscheduled downtime, all of which traced back to an nginx config I changed at 11 p.m. without testing.
That migration is the template for this article. Deploy a Node.js app to a DigitalOcean VPS the way I do it for clients in 2026: Ubuntu 24.04 LTS droplet, Node 20 LTS via NodeSource, PM2 for process management, nginx as reverse proxy, Let’s Encrypt for SSL, ufw for the firewall, and the deploy mechanism that lets me ship in 30 seconds without dropping a single in-flight request.
Quick start: 12 commands to a live HTTPS API
Skip ahead to the production sections after this works. Assumes you have a domain pointed at the droplet’s IPv4 and an Ubuntu 24.04 droplet provisioned (any size — the $6/month basic is fine for the first 100 req/s).
```bash
ssh root@your-droplet-ip
adduser deploy && usermod -aG sudo deploy
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
apt-get install -y nodejs nginx git
npm install -g pm2
su - deploy
git clone https://github.com/you/your-app.git
cd your-app && npm ci --production && npm run build
pm2 start dist/server.js --name api && pm2 save && pm2 startup
# back as root
nano /etc/nginx/sites-available/api   # see config below
ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
nginx -t && systemctl reload nginx
ufw allow OpenSSH && ufw allow 'Nginx Full' && ufw enable
apt-get install -y certbot python3-certbot-nginx
certbot --nginx -d api.yourdomain.com --redirect
```

Twelve commands, twenty minutes, $6 a month. Now read the rest so it survives traffic.
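Before moving on, I confirm the deploy actually took. Here is a minimal smoke-check sketch I use; the `/health` route and domain are placeholders for whatever your app exposes, and the curl binary is parameterized (`${CURL:-curl}`) purely so the function can be exercised without a live server:

```bash
# smoke: poll a URL until it returns HTTP 200, or give up after N tries.
smoke() {
  local url=$1 tries=${2:-5} delay=${3:-2} code=000
  for ((i = 1; i <= tries; i++)); do
    # ${CURL:-curl} lets tests substitute a stub for the real curl
    code=$("${CURL:-curl}" -s -o /dev/null -w '%{http_code}' "$url") || code=000
    if [ "$code" = "200" ]; then
      echo "up after $i attempt(s)"
      return 0
    fi
    sleep "$delay"
  done
  echo "still down (last status: $code)"
  return 1
}

# On the droplet (hypothetical health route):
#   smoke https://api.yourdomain.com/health 10 3
```

Run it from your laptop after every deploy; the retry loop matters because PM2 takes a second or two to bring workers up on first boot.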
What is wrong with the typical “deploy Node to a VPS” tutorial
Most VPS tutorials get you to “it responds on port 80” and stop. Six production failures I have personally inherited cleaning up other people’s setups:
- Running the Node process as root. One npm dependency with a postinstall script and someone owns your droplet.
- No process supervisor at all. Server crashes, never comes back up. PM2 (or systemd) restart on failure is not optional.
- nginx default config with no rate limiting. Your `/auth/login` endpoint takes credential-stuffing traffic from a botnet your first week live.
- SSH on port 22 with password auth enabled. Two days after going live the auth log fills with brute-force attempts. Turn off password SSH; use keys.
- No firewall. Postgres on 5432 listening on the public interface because you forgot to bind to localhost.
- Manual zero-downtime deploys via “kill the process, hope nginx queues.” One in twenty deploys drops requests. PM2 reload solves this and almost no one uses it.
Droplet sizing: what to actually pick
| Workload | Plan | Specs | Cost |
|---|---|---|---|
| Hobby project, 0–100 req/s | Basic | 1 vCPU · 1 GB · 25 GB SSD | $6/mo |
| Small API, 100–500 req/s | Basic | 2 vCPU · 4 GB · 80 GB SSD | $24/mo |
| Real product, 500–2000 req/s | General Purpose | 4 vCPU · 16 GB · 100 GB SSD | $84/mo |
| Co-host Postgres on same droplet | + size up RAM | add 4–8 GB for shared_buffers | — |
Honest opinion: start at $24. The $6 droplet runs out of RAM the first time you run npm ci on a Next.js app. Buying small to “save money” costs more than the $18/month you save when you have to rebuild the droplet at 1 a.m. because builds OOM.
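My back-of-envelope check before picking a plan, sketched as a function. The 80% RAM budget and 400 MB allowance for the OS plus nginx are my rules of thumb, not DigitalOcean figures:

```bash
# fits: will N PM2 workers with a given heap cap fit in this droplet's RAM?
# Rule of thumb (mine): workers * heap + base overhead <= 80% of RAM.
fits() {
  local ram_mb=$1 workers=$2 heap_mb=$3 overhead_mb=${4:-400}
  local need=$((workers * heap_mb + overhead_mb))
  local budget=$((ram_mb * 80 / 100))
  if ((need <= budget)); then
    echo "ok: ${need}MB needed, ${budget}MB budget"
  else
    echo "too small: ${need}MB needed, ${budget}MB budget"
  fi
}

fits 1024 1 600   # $6 droplet, one 600 MB worker
fits 4096 2 600   # $24 droplet, one worker per vCPU
```

The $6 droplet fails this check with a single worker at the 600 MB cap used later in this article, which is the arithmetic behind "start at $24".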
Initial droplet hardening (do this before anything else)
The first ten minutes after the droplet boots, before you install anything else:
```bash
ssh root@your-droplet-ip

# Create a non-root user with sudo
adduser deploy
usermod -aG sudo deploy

# Lock down SSH
mkdir -p /home/deploy/.ssh
cp ~/.ssh/authorized_keys /home/deploy/.ssh/
chown -R deploy:deploy /home/deploy/.ssh
chmod 700 /home/deploy/.ssh
chmod 600 /home/deploy/.ssh/authorized_keys

# /etc/ssh/sshd_config — set these
# (on Ubuntu 24.04, also check /etc/ssh/sshd_config.d/ for cloud-init overrides)
sed -i 's/^#*PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
sed -i 's/^#*PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
systemctl restart ssh

# Firewall
ufw default deny incoming
ufw default allow outgoing
ufw allow OpenSSH
ufw enable

# Automatic security updates
apt-get install -y unattended-upgrades
dpkg-reconfigure --priority=low unattended-upgrades
```

Test SSH as deploy in a second terminal before closing the root session. Lock yourself out once and you will remember.
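I also verify the effective sshd settings before logging out, because a drop-in file under `/etc/ssh/sshd_config.d/` can silently override the main config. A small helper of my own (not a standard tool) that checks `sshd -T` output:

```bash
# check_sshd: pass it the effective config (`sudo sshd -T`) and it tells you
# whether root login and password auth are actually off.
check_sshd() {
  if grep -qi '^permitrootlogin no$' <<<"$1" &&
     grep -qi '^passwordauthentication no$' <<<"$1"; then
    echo "locked down"
  else
    echo "STILL OPEN: check /etc/ssh/sshd_config.d/ for overrides"
  fi
}

# On the droplet:
#   check_sshd "$(sudo sshd -T)"
```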
Installing Node.js the right way
Three options, only one is correct in 2026:
| Method | Verdict |
|---|---|
| Ubuntu’s `apt install nodejs` | No. Ships ancient versions, one major behind. |
| nvm | Fine for development. Wrong for production: per-user, awkward under PM2 and systemd. |
| NodeSource apt repo | Yes. System-wide install, current LTS, plays nicely with PM2 and systemd. |
```bash
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs
node --version   # v20.x.x
npm --version
```

Process management: PM2 vs systemd
I have shipped both. The honest take: PM2 if you want zero-downtime reload and a built-in cluster mode. systemd if you have one Node process and want minimal dependencies.
```bash
npm install -g pm2

# Start the app under your deploy user.
#   -i max                     cluster mode, one worker per CPU
#   --max-memory-restart 600M  restart any worker that goes over
#   --time                     timestamps in logs
cd /home/deploy/your-app
pm2 start dist/server.js --name api -i max --max-memory-restart 600M --time

# Generate a systemd service so PM2 starts on boot
pm2 startup systemd -u deploy --hp /home/deploy
pm2 save
```

PM2 cluster mode is the underrated win. With `-i max`, PM2 forks one Node process per CPU core and round-robins incoming connections across them via Node's cluster module. A 4-vCPU droplet doing 600 req/s with one worker becomes 2,400 req/s with four — for free.
nginx as reverse proxy
Why nginx and not “expose Node directly on port 80”: TLS termination, request buffering, gzip / brotli, static file serving, and rate limiting. Node could do all of this. nginx does it faster and you don’t have to write the code.
The config that ships:
```nginx
# /etc/nginx/sites-available/api

upstream api_backend {
    server 127.0.0.1:3000 fail_timeout=0;
    keepalive 64;
}

# Per-IP rate limit zone — 10 req/s with burst of 20
limit_req_zone $binary_remote_addr zone=api_limit:10m rate=10r/s;

server {
    listen 80;
    listen [::]:80;
    server_name api.yourdomain.com;

    client_max_body_size 5M;
    gzip on;
    gzip_types application/json text/plain;

    location / {
        limit_req zone=api_limit burst=20 nodelay;

        proxy_pass http://api_backend;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # Don't buffer SSE / streaming responses (e.g. OpenAI streams)
        proxy_buffering off;
        proxy_read_timeout 60s;
        proxy_send_timeout 60s;
    }

    # Tighter limit on auth endpoints — credential stuffing protection
    location /auth/ {
        limit_req zone=api_limit burst=5 nodelay;
        proxy_pass http://api_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

```bash
sudo ln -s /etc/nginx/sites-available/api /etc/nginx/sites-enabled/
sudo rm /etc/nginx/sites-enabled/default
sudo nginx -t && sudo systemctl reload nginx
sudo ufw allow 'Nginx Full'
```

The `proxy_buffering off` line is the one most tutorials miss. With buffering on, nginx buffers the upstream response before forwarding it to the client, which delays or outright kills streaming responses from OpenAI and any SSE endpoint.
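One refinement worth adding: by default nginx answers rate-limited requests with 503, which uptime monitors read as "server down". Two stock nginx directives fix the signal; they can sit next to the `limit_req_zone` line in the site file:

```nginx
# Return 429 Too Many Requests instead of 503 for rate-limited clients,
# and log rejections at warn level so they stand out in the error log.
limit_req_status 429;
limit_req_log_level warn;
```

A 429 also tells well-behaved clients to back off rather than retry immediately.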
SSL with Let’s Encrypt and Certbot
```bash
sudo apt-get install -y certbot python3-certbot-nginx
sudo certbot --nginx -d api.yourdomain.com --redirect
sudo systemctl status certbot.timer   # auto-renewal is on by default
```

Certbot edits your nginx config in place and adds the HTTP → HTTPS redirect. Renewal runs twice a day via the systemd timer. Let’s Encrypt certificates last 90 days; auto-renewal triggers at 60 days. Set up an email alert from the renewal job — silently failing renewal is the most common SSL outage I see in the wild.
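For that alert, a cron-able sketch: it reads the certificate's notAfter date with openssl and prints days remaining. The live-cert path shown is the Let's Encrypt default layout; wiring the actual notification (mail, Slack webhook) is yours to choose:

```bash
# days_left: whole days until the certificate in file $1 expires.
days_left() {
  local end epoch now
  end=$(openssl x509 -enddate -noout -in "$1" | cut -d= -f2)
  epoch=$(date -d "$end" +%s)
  now=$(date +%s)
  echo $(((epoch - now) / 86400))
}

# Nightly cron sketch: complain when under 21 days, i.e. renewal has been
# silently failing for more than a week past the 60-day renewal point.
#   d=$(days_left /etc/letsencrypt/live/api.yourdomain.com/fullchain.pem)
#   [ "$d" -lt 21 ] && echo "cert expires in ${d}d" | mail -s "SSL alert" you@example.com
```

`certbot renew --dry-run` is also worth running once by hand to prove renewal works before day 60 tests it for you.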
Zero-downtime deploys
The deploy script I run from a CI pipeline (GitHub Actions, GitLab CI, manually — same commands):
```bash
#!/bin/bash
# deploy.sh — run on the droplet
set -euo pipefail

cd /home/deploy/your-app
git pull --ff-only
npm ci --production
npm run build
npx prisma migrate deploy   # if you use Prisma
pm2 reload api --update-env # zero-downtime; PM2 brings new workers up before killing old
```

The magic is `pm2 reload`, not `restart`. Reload starts new workers, lets them pass the readiness check, then kills the old workers — the listening socket never closes from nginx’s perspective. PM2’s cluster mode is what makes this work.
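The keep-the-previous-build advice can be made mechanical with a releases-directory layout. This is a sketch under assumptions, not the script above: the `releases/` and `current` names are a common convention, and the build-output directory you pass in is whatever your `npm run build` produces:

```bash
# promote_release: copy a finished build into releases/<timestamp>, point
# "current" at it, and keep only the two newest releases for rollback.
promote_release() {
  local app=$1 build=$2 ts
  ts=$(date +%Y%m%d%H%M%S%N)             # nanoseconds avoid same-second clashes
  mkdir -p "$app/releases/$ts"
  cp -r "$build/." "$app/releases/$ts/"
  ln -sfn "releases/$ts" "$app/current"  # flip the symlink, then pm2 reload
  # prune everything but the two newest releases
  ls -1d "$app"/releases/* | head -n -2 | xargs -r rm -rf
}
```

Rolling back then means repointing `current` at the previous release directory and running `pm2 reload api`; no rebuild in the critical path of an incident.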
From your laptop:
```bash
ssh deploy@api.yourdomain.com 'bash /home/deploy/your-app/deploy.sh'
```

Roll back: `git reset --hard HEAD~1 && npm run build && pm2 reload api` (the rebuild matters; resetting the source alone leaves the old `dist/` untouched only if you never cleaned it). Keep the previous build artefact around for 24 hours; deleting it three minutes after deploy means a slow incident response.
What breaks in staging that won’t in prod (and vice versa)
| Behaviour | Staging | Production |
|---|---|---|
| Node memory ceiling | Usually fine on 1 GB droplet | OOM at peak load — need V8 --max-old-space-size |
| nginx rate limit | Never triggered by you alone | Triggers under real traffic — tune zone size and burst |
| PostgreSQL connection pool | 5 connections plenty | Pool exhaustion at 30 concurrent users (see Prisma setup) |
| Certbot renewal | Works first time | Fails 60 days in if port 80 is blocked or DNS moved |
| PM2 max-memory-restart | Never fires | Fires on memory leak — symptom of deeper bug that needs fixing |
Production checklist
- Non-root deploy user with SSH key auth only. Root SSH disabled.
- ufw enabled with only OpenSSH and Nginx Full open.
- PM2 cluster mode with one worker per CPU and `--max-memory-restart` set.
- nginx with rate limiting on at least `/auth/` endpoints.
- `proxy_buffering off` if you serve SSE or streaming responses.
- Let’s Encrypt with HTTPS redirect, auto-renewal verified by manually triggering once.
- PM2 startup script registered so the app comes back after droplet reboot. If your Node app crashes with EADDRINUSE on restart, the fix is a graceful shutdown handler that closes the server before the process exits.
- Log rotation: PM2’s default logs grow forever. Install `pm2-logrotate` and set max-size.
- Off-droplet backups for any database living on the droplet — DigitalOcean Spaces or S3, nightly cron.
- Monitoring: at minimum, UptimeRobot on the public URL and DigitalOcean’s built-in droplet alerts for CPU/memory.
When not to use a DigitalOcean droplet
Three cases where a managed platform pays for itself:
- You ship a SPA + API and want to deploy in < 60 seconds with one command. Vercel, Render, or DigitalOcean App Platform are objectively better at “git push and forget.”
- Your traffic is spiky and serverless-shaped. A droplet you pay for 24/7 is bad economics for a webhook that runs 200 times a day. Use AWS Lambda or Cloudflare Workers.
- Compliance requires SOC 2 / HIPAA without you operating the controls. Managed platforms inherit their compliance posture; a droplet is your problem.
Troubleshooting FAQ
Why is my Node.js app slow when nginx says it forwarded the request immediately?
Almost always your database. Run EXPLAIN ANALYZE against the slow endpoint’s query, or look at pg_stat_statements. nginx → PM2 → Node has microseconds of overhead; the ten seconds are downstream.
How do I share a droplet between an app and Postgres?
Bind Postgres to localhost only (listen_addresses = 'localhost' in postgresql.conf), allocate shared_buffers = 25% of RAM, set max_connections to twice your PM2 worker count plus headroom. Cleanest way: separate small Postgres droplet at $6/month, even at low traffic — easier to scale, easier to back up.
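As a concrete sketch for the $24 droplet (2 vCPU / 4 GB) running two PM2 workers, those rules work out roughly as below. The path is Ubuntu 24.04's default for the packaged PostgreSQL 16; the numbers are illustrative, not tuned values:

```ini
# /etc/postgresql/16/main/postgresql.conf
listen_addresses = 'localhost'   # never expose 5432 on the public interface
shared_buffers = 1GB             # ~25% of 4 GB RAM
max_connections = 20             # 2x the 2 PM2 workers, plus generous headroom
                                 # for psql sessions, migrations, monitoring
```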
Should I use Docker on a DigitalOcean droplet?
If you have one app, no — Docker adds 200 MB of image overhead and a layer of indirection for no benefit. If you run multiple apps on the same droplet, yes — Docker Compose is cleaner than juggling PM2 namespaces.
How do I deploy from a CI pipeline?
Generate an SSH deploy key, store the private key as a CI secret, run the deploy script over SSH on the droplet. appleboy/ssh-action for GitHub Actions is the simplest. Cap the deploy key to the deploy user, never use root.
What if I need to host the database on the same droplet?
Fine for small apps. Two rules: bind to localhost, and tune shared_buffers + max_connections for the available RAM. My Postgres + Prisma setup article covers the Node side.
How do I do blue-green deployment on one droplet?
Run two PM2 processes on different ports (3000, 3001), switch the nginx upstream between them, reload nginx. More effort than pm2 reload for marginal gain on a single droplet — worth it once you have two droplets behind a load balancer.
Why did my droplet run out of disk after a month?
Three usual suspects: PM2 logs (install pm2-logrotate), nginx access logs (logrotate config), and Docker layer cache if you use Docker (docker system prune -a nightly).
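When the disk does fill, find the offender before deleting anything. A small helper of my own (not a standard tool) that lists the biggest immediate subdirectories of a path:

```bash
# top_dirs: biggest immediate subdirectories of $1, largest first (sizes in KB).
# -x stays on one filesystem so /proc and other mounts don't pollute the numbers.
top_dirs() {
  du -x -d1 -k "$1" 2>/dev/null | sort -rn | head -"${2:-10}"
}

# Usual suspects on a droplet like this one:
#   top_dirs /
#   top_dirs /home/deploy/.pm2/logs
#   top_dirs /var/lib/docker      # if you run Docker
```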
Render or Fly.io instead?
If you value developer experience over per-month cost, yes. Render’s free tier is fine for hobby projects; Fly.io’s edge deployment is genuinely useful for low-latency global apps. The droplet path wins on transparency, control, and cost at any meaningful traffic level.
What ships next
This article gets you to a hardened single-droplet deployment. The natural next steps: a load balancer in front of two droplets for high availability, a separate managed Postgres for backups and read replicas, and a CI/CD pipeline that runs migrations and pushes new builds without manual SSH. All three pieces are queued. If you are building auth on top of this, the JWT pattern I use assumes exactly the deploy shape above.