
Fix EADDRINUSE: address already in use in Node.js (Mac, Linux, Windows)

Fix EADDRINUSE: address already in use in Node.js on Mac, Linux, and Windows. The kill commands per OS, the TIME_WAIT trap, graceful shutdown, and how to stop it from coming back.

I had this one break in prod on a Friday afternoon. The server threw EADDRINUSE: address already in use :::3000 and wouldn’t come back up. The deploy script restarted the process, the new process tried to bind port 3000, the old one was still holding it, and the supervisor kept retrying in a loop until somebody noticed in Slack. Total damage: about three minutes of dropped traffic and one engineer’s evening.

The fix takes thirty seconds once you know it. The longer story — why EADDRINUSE in Node.js happens, why the obvious fix doesn’t always work, and how to prevent it from coming back — is what this article covers. Mac, Linux, Windows. Node 20 LTS, Node 22.

The fix in one paragraph (skip the rest if you only have 60 seconds)

Find the process holding the port, kill it, restart your app. Three commands per OS, copy-paste:

bash
# macOS / Linux
lsof -i :3000              # find PID listening on 3000
kill -9 <PID>              # kill it
node server.js             # restart
bash
# Windows (PowerShell)
Get-NetTCPConnection -LocalPort 3000 | Select OwningProcess
Stop-Process -Id <PID> -Force
node server.js

One-liner for the impatient:

bash
# macOS / Linux
kill -9 $(lsof -t -i :3000)

# Windows
Stop-Process -Id (Get-NetTCPConnection -LocalPort 3000).OwningProcess -Force

That solves the symptom. Now the part that prevents it next time.

Why EADDRINUSE happens (the part most tutorials skip)

Five real causes I have personally debugged. The first three account for 90% of incidents:

  1. Old process did not exit. You hit Ctrl+C in the wrong terminal, or your supervisor crashed mid-restart. The previous Node process is still running and still bound to the port.
  2. nodemon orphaned a child. nodemon spawns your script as a child process, watches files, restarts on change. If nodemon crashes or you kill it ungracefully, the child can survive and keep the socket.
  3. Port reuse race after a previous crash. When a process holding a TCP socket dies abruptly, the kernel keeps the socket in TIME_WAIT for ~60 seconds before releasing it. Your new process binds and gets EADDRINUSE during that window.
  4. Two services configured for the same port. Docker Compose with two services both forwarding host port 3000. The second one starts and gets the error. Common with monorepos that copy .env.example blindly.
  5. Some other random process is on that port. Skype historically, AirPlay Receiver on macOS Monterey+ (port 5000), Windows IIS development sites. Hard to diagnose because you don’t expect it.

The wrong fix everyone tries first

This is the diagnostic Stack Overflow usually offers and the one I see misused most often:

bash
sudo killall node          # kills every Node process on the machine

It works. It also kills the unrelated Next.js dev server in your other terminal, the language server backing your IDE, and the background script processing analytics. Use targeted kill via lsof; never broadcast-kill Node.

Find the right PID, on every OS

| OS | Command | What it shows |
| --- | --- | --- |
| macOS | lsof -i :3000 -sTCP:LISTEN | PID, command, user, file descriptor |
| Linux (modern) | ss -tlnp \| grep :3000 | Same shape, faster than lsof |
| Linux (any) | lsof -i :3000 -sTCP:LISTEN | Universal fallback |
| Windows (cmd) | netstat -ano \| findstr :3000 | PID in last column |
| Windows (PS) | Get-NetTCPConnection -LocalPort 3000 | OwningProcess column |

Verify what you are about to kill:

bash
ps -p <PID> -o pid,cmd     # macOS / Linux
Get-Process -Id <PID>       # Windows PowerShell

If it is your own Node script, kill it. If it is something you don’t recognise (a system service, your IDE), don’t.

Why kill sometimes fails — and the SIGKILL escalation

Three signals worth knowing:

| Signal | Number | Behaviour |
| --- | --- | --- |
| SIGTERM | 15 | “Please exit.” Process can clean up. Default for plain kill. |
| SIGINT | 2 | Same as Ctrl+C. Most Node apps handle this gracefully. |
| SIGKILL | 9 | “Die now.” Cannot be caught or ignored. Releases the socket immediately. |

Try graceful first: kill <PID>. If the process refuses to exit (rare; usually because of an unhandled promise blocking shutdown), escalate: kill -9 <PID>. SIGKILL is the nuclear option — the process gets no chance to flush logs or close database connections. Use it sparingly, but use it when needed.

The TIME_WAIT trap (and how to actually fix it)

You restart your server. The old process is gone, lsof shows nothing, and Node still throws EADDRINUSE. This is the TCP TIME_WAIT state — the kernel holds the socket for up to 60 seconds after the previous owner closed it ungracefully.

Three ways out:

  1. Wait it out. 60 seconds, retry. Boring, always works.
  2. Use a different port. Set PORT=3001 and restart. Sidesteps the wait entirely; just remember to switch back.
  3. SO_REUSEADDR on the listener. Tells the kernel “let me bind even if the address is in TIME_WAIT.” On Linux and macOS, Node (via libuv) already sets SO_REUSEADDR on TCP listeners, which is why this trap bites hardest on Windows. Node’s HTTP server doesn’t expose the flag as a listen option, but a graceful exit handler avoids landing in TIME_WAIT in the first place:
TypeScript
// src/server.ts
import { createServer } from 'http';

const server = createServer(/* your app */);
server.listen(3000);

const shutdown = (signal: string) => {
  console.log(`Received ${signal}, closing server`);
  server.close(() => {
    console.log('Server closed cleanly');
    process.exit(0);
  });

  // Hard ceiling — if shutdown takes more than 10s, force exit.
  setTimeout(() => {
    console.error('Forcing exit after timeout');
    process.exit(1);
  }, 10_000).unref();
};

process.on('SIGTERM', () => shutdown('SIGTERM'));
process.on('SIGINT', () => shutdown('SIGINT'));

server.close() stops accepting new connections and waits for in-flight ones to finish. That clean close skips the TIME_WAIT problem on next boot. The 10-second hard ceiling prevents shutdown from hanging if a long-running request never returns.

nodemon orphans: kill the parent and the child both

If you run with nodemon and Ctrl+C doesn’t release the port:

bash
# macOS / Linux
pkill -f node                    # matches nodemon and its child, but also anything else with "node" in its command line

# Or more surgical:
ps aux | grep node               # find the children
kill -9 <child PID>

Long-term fix: configure nodemon, via nodemon.json, to restart its child with SIGTERM instead of the default SIGUSR2 (nodemon.json is strict JSON, so no inline comments):

json
{
  "execMap": { "js": "node" },
  "signal": "SIGTERM"
}

Bind to a random free port instead of failing

Useful for development and tests. Pass 0 as the port and let the kernel pick:

TypeScript
const server = app.listen(0, () => {
  const { port } = server.address() as { port: number };
  console.log(`Listening on http://localhost:${port}`);
});

For production where you do need port 3000, detect the conflict and exit cleanly with a useful message instead of the default crash:

TypeScript
server.on('error', (err: NodeJS.ErrnoException) => {
  if (err.code === 'EADDRINUSE') {
    console.error(`Port ${PORT} is already in use. Run: lsof -i :${PORT}`);
    process.exit(1);
  }
  throw err;
});

The macOS AirPlay trap (and Windows equivalents)

macOS Monterey+ binds port 5000 (and 7000) to AirPlay Receiver by default. If you set PORT=5000 in your .env on a Mac, you get EADDRINUSE the first time you boot. Either change to 3000 or disable AirPlay Receiver: System Settings → General → AirDrop & Handoff on Ventura and later, System Preferences → Sharing on Monterey itself.

On Windows, IIS Express commonly holds 80 and 443. Hyper-V reserves random port ranges; check with netsh interface ipv4 show excludedportrange protocol=tcp.

Production checklist

  • Graceful shutdown handler on SIGTERM and SIGINT, with a 10-second hard ceiling.
  • server.on('error') handler that prints a useful diagnostic on EADDRINUSE and exits 1.
  • Process supervisor (PM2 or systemd) with restart-on-crash and a backoff between restarts. Without backoff, EADDRINUSE on boot becomes a tight CPU loop.
  • Document the port in .env.example with a comment about what it is for.
  • One service per port. Reserve ports per app in a team-wide table; never assume “3000 is free.”
  • Use the get-port package in tests to pick a free port automatically. Fixes flaky CI.
  • Don’t run as root. Binding to 80/443 needs root or capabilities; put nginx in front instead.

Troubleshooting FAQ

Why does my port stay busy after I kill the process?

TCP TIME_WAIT. The kernel holds the socket for up to 60 seconds after an ungraceful close. Wait, switch ports temporarily, or fix the shutdown handler so future closes are graceful.

Can I bind multiple Node processes to the same port?

Yes, with Node’s cluster module or PM2 cluster mode. The primary process accepts connections and hands them to workers round-robin on most platforms (Windows defers to the OS scheduler). This is the supported pattern; sharing a port between unrelated processes is not.

What is the difference between kill and kill -9?

kill sends SIGTERM (graceful). kill -9 sends SIGKILL (immediate, uncatchable). Try graceful first; escalate to -9 only if the process refuses to exit. Note that SIGKILL skips your shutdown handler — you may leave the socket in TIME_WAIT.

Does EADDRINUSE happen on Unix sockets too?

Yes. Same error code, different fix: delete the socket file. rm /tmp/your.sock before binding.

How do I find which Node app holds the port if I have many?

lsof -i :3000 -sTCP:LISTEN shows the PID; ps -p <PID> -o cmd shows the full command line. The command line usually identifies the app — different cwd, different script name.

Can my CI pipeline hit this?

Yes, on shared runners. Use a random port (listen(0)) for tests, or use Docker with explicit port mappings that fail loudly when conflicting.

Why does Docker say port is in use when nothing is listening?

Either another container has the host port, or you stopped a container and Docker hasn’t released the port yet. docker ps -a to see stopped containers, docker port <name> to confirm mapping. Restart Docker daemon as last resort.

Should I use a high port to avoid conflicts?

For development, yes — pick something memorable above 1024 (3000, 4000, 8080, 8000). For production, put nginx on 80/443 and proxy to your Node app.

What ships next

This article fixes the symptom and prevents it. The natural next step is graceful shutdown done properly — drain in-flight requests, close database connections, flush logs. If your shutdown hangs because of a hanging async handler, fixing the error handling fixes the shutdown too. If you are deploying behind PM2, the supervisor configuration matters as much as the code. If your boot fails earlier with “Cannot find module”, fix that first — the port-binding error never surfaces.