Self-Hosted Privacy · 8 min read

Self-hosting Vaultwarden without making it fragile

How to deploy Vaultwarden behind a reverse proxy, lock down signups and admin surfaces, handle WebSocket logging safely, and back it up properly.

The right default Vaultwarden deployment is boring on purpose.

That is not an insult. It is the whole point.

Vaultwarden is the kind of service where cleverness is often the enemy. You are self-hosting a password manager. That means the questions that matter are not "did I containerize this stylishly?" but:

  • is the service exposure minimal?
  • is the reverse proxy sane?
  • are signups and admin surfaces locked down?
  • are backups real?
  • will I be able to restore this under stress?

If those answers are good, the setup is good.

Start with the container and a local bind

Vaultwarden's README describes it as an alternative implementation of the Bitwarden server API for self-hosted use, especially where the official server stack is too heavy. The same README strongly points you toward running it behind a reverse proxy rather than depending on built-in TLS.

The boring deployment pattern looks like this:

docker run --detach --name vaultwarden \
  --env DOMAIN="https://vault.example.com" \
  --volume /srv/vaultwarden:/data/ \
  --restart unless-stopped \
  --publish 127.0.0.1:8000:80 \
  vaultwarden/server:latest

Or in Compose:

services:
  vaultwarden:
    image: vaultwarden/server:latest
    restart: unless-stopped
    environment:
      DOMAIN: "https://vault.example.com"
      SIGNUPS_ALLOWED: "false"
    volumes:
      - ./vw-data:/data
    ports:
      - "127.0.0.1:8000:80"

That local bind matters. It means the app itself does not need to be directly internet-reachable. Your edge layer gets to be the thing that is exposed, not the password manager process.

For a service like this, reducing direct exposure is not optional polish. It is table stakes.

It also lines up with Vaultwarden's own browser expectations. The README notes that the web vault needs a secure context for the Web Crypto API, which means HTTPS in normal use or the special localhost case during local testing. That is one more reason the reverse-proxy pattern is the sane default: the edge can provide the correct secure context without teaching the application container to be your TLS endpoint.

Reverse proxy and HTTPS belong at the edge

Vaultwarden's README and hardening guidance both reinforce the same architecture:

  • keep Vaultwarden behind a reverse proxy
  • prefer the proxy for public TLS termination
  • avoid treating built-in Rocket TLS as the public production default

That is the right call.

A reverse proxy gives you one clean edge for:

  • TLS management
  • hostname handling
  • client IP forwarding
  • access logging decisions
  • optional upstream access controls

That separation also helps you later. Certificate work, hostname policy, and optional auth controls belong at the edge. The password-manager container should not double as your networking laboratory.

The proxy examples wiki also reminds you to forward the real client IP and, in some setups, the correct forwarded protocol information. Those headers are not busywork. They are how the app and the proxy agree about the request story.
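A minimal nginx sketch of that edge, close in spirit to the wiki's proxy examples. The hostname, certificate paths, and upstream port are assumptions from earlier in this post; adapt them to your deployment:

```nginx
server {
    listen 443 ssl http2;
    server_name vault.example.com;

    # TLS terminates here, not in the Vaultwarden container
    ssl_certificate     /etc/letsencrypt/live/vault.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/vault.example.com/privkey.pem;

    client_max_body_size 128M;  # attachment uploads pass through the proxy

    location / {
        proxy_pass http://127.0.0.1:8000;

        # Tell the app the real request story
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;

        # WebSocket upgrade for notification clients
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}
```

One server block, one upstream, headers forwarded. That is the whole edge.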

This also pairs naturally with /blog/cloudflare-tunnel-self-host if you want controlled inbound access without exposing a public listening port directly.

Hardening that actually matters

The Hardening Guide is useful because it focuses on the boring-dangerous defaults instead of cosplay.

The first hardening moves I would make on almost every private deployment are:

  • disable open registration
  • consider disabling invitations
  • disable password hints if you do not need them
  • treat the admin surface as a deliberate maintenance interface, not a casual feature

These are not dramatic changes. They are exactly the kind of exposure reductions people skip because they are too ordinary to feel like security work.

That is a mistake.

If you are not intentionally offering a public sign-up service, there is no good reason for open registration to stay enabled. If you are not deliberately operating user invitations, there is no good reason to leave that workflow exposed either.

This is the same logic as /blog/network-opsec-checklist: fewer reachable or usable surfaces means fewer surprises.

Password hints deserve the same blunt treatment. If the feature is not part of a deliberate support model, disable it. Helpful little recovery affordances are exactly the sort of thing attackers appreciate when operators leave them on out of habit.
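In configuration terms, those reductions are a few environment lines. The variable names are Vaultwarden's documented settings; the values shown are the locked-down choices for a private deployment:

```yaml
    environment:
      DOMAIN: "https://vault.example.com"
      SIGNUPS_ALLOWED: "false"       # no open registration
      INVITATIONS_ALLOWED: "false"   # no invite workflow unless you deliberately run one
      SHOW_PASSWORD_HINT: "false"    # no hint recovery affordance
```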

The admin surface deserves respect

Vaultwarden's admin page is powerful enough that it should feel slightly inconvenient.

The official wiki on enabling the admin page, together with the broader hardening guidance, makes the right stance clear:

  • use an admin token
  • prefer hardened token handling
  • avoid casual or plaintext exposure patterns
  • do not treat the admin page as a normal user-facing feature

If a deployment is "easy to admin from anywhere" in a loose, casual sense, that is often a warning sign rather than a success.

The admin page should feel like something you enter on purpose, with a proper token workflow and a reason. Convenience is not the right north star for the most privileged surface in a self-hosted password manager.

The same philosophy applies to reverse-proxy reachability. Just because you can make the admin surface reachable from everywhere through the same hostname path does not mean you should. Keeping privileged workflows boring and deliberate is one of the simplest ways to reduce the number of emergency decisions you will regret later.

WebSocket notifications are useful, and a little dangerous

Vaultwarden's WebSocket notifications page says notifications are used by browser, desktop, and browser-extension clients. Mobile clients use native push services instead.

That is useful functionality. It is also where one of the easier-to-miss logging problems lives.

The hardening guide warns that notification requests can place an access_token parameter into reverse-proxy access logs unless you handle logging carefully. That is exactly the sort of leak people discover months later while looking through archives they never expected to matter.

So if you enable notifications, add a boring reminder to your deployment notes:

  • keep Vaultwarden on localhost or a private container network
  • terminate HTTPS at the reverse proxy
  • forward the real client IP correctly
  • protect or disable noisy access logs for notification URLs

This is not a reason to avoid notifications. It is a reason to stop pretending logging defaults are harmless.

And it is also a reason to review proxy log scope deliberately. "We log everything by default" is not a neutral posture when query strings can contain valuable session material.
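One way to keep that session material out of logs, sketched for nginx. The location path matches the notification hub in current Vaultwarden versions, but verify it against your deployment:

```nginx
# Inside the vault.example.com server block
location /notifications/hub {
    proxy_pass http://127.0.0.1:8000;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    # Query strings on this path can carry access_token values.
    # Do not write them to disk.
    access_log off;
}
```

If you would rather keep the log line than lose it, a custom log format that drops query strings achieves the same end; the point is that the decision is deliberate.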

Backups are not optional

The most common Vaultwarden mistake is not cryptography. It is treating backups like a footnote while self-hosting a service that stores secrets people expect to survive hardware failure.

The official backup guide gives you the important primitives. For SQLite, the recommended approach uses SQLite's own backup facilities, via either of:

sqlite3 data/db.sqlite3 ".backup '/path/to/backups/db-$(date +%Y%m%d-%H%M).sqlite3'"
sqlite3 data/db.sqlite3 "VACUUM INTO '/path/to/backups/db-compact-$(date +%Y%m%d-%H%M).sqlite3'"

That is good because it is explicit and restorable.

The point is not just "copy the database file sometimes." The point is:

  • know what lives under /data
  • back up the right state consistently
  • make restoration straightforward
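Those primitives can be wrapped into one small script. A hedged sketch, assuming `sqlite3` is installed on the host and using illustrative paths; adjust the data and destination directories to your deployment:

```shell
#!/bin/sh
# Sketch: consistent backup of a SQLite-backed Vaultwarden deployment.
# Paths are illustrative assumptions, not canonical locations.

backup_vault() {
  data="$1"   # the directory mounted at /data in the container
  dest="$2"   # where snapshots accumulate
  stamp="$(date +%Y%m%d-%H%M%S)"
  mkdir -p "$dest"

  # Consistent snapshot via the SQLite backup API (safe while the service runs)
  sqlite3 "$data/db.sqlite3" ".backup '$dest/db-$stamp.sqlite3'"

  # The rest of /data that matters: attachments, sends, RSA keys, config.
  # Missing entries are skipped rather than fatal.
  for item in attachments sends rsa_key.pem rsa_key.pub.pem config.json; do
    [ -e "$data/$item" ] && cp -a "$data/$item" "$dest/$item-$stamp" || true
  done

  # Prune snapshots older than 14 days
  find "$dest" -type f -mtime +14 -delete

  echo "$dest/db-$stamp.sqlite3"
}

# Example invocation against a bind-mounted data directory (path is an assumption)
[ -f /srv/vaultwarden/db.sqlite3 ] && backup_vault /srv/vaultwarden /var/backups/vaultwarden || true
```

Run it from cron or a systemd timer, and ship the destination directory somewhere off the host.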

Then actually test a restore once. A backup you have never restored is evidence of optimism, not proof of recoverability. For a password manager, restore drills matter because the service is usually fine right up until the day you need it back immediately.

And when you test restore, test the whole boring chain:

  • container comes back up
  • reverse proxy still points at the right local bind
  • clients can log in
  • notifications still behave the way you expect

That may sound excessive until the first restore exposes that only the database came back cleanly while the rest of the service assumptions did not.
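The first two links in that chain are cheap to script. A sketch, assuming the loopback bind from earlier and Vaultwarden's plain `/alive` health endpoint (verify the endpoint against your version):

```shell
#!/bin/sh
# Sketch: post-restore smoke checks. BASE is an assumption -- point it at
# whatever your reverse proxy forwards to.
BASE="${BASE:-http://127.0.0.1:8000}"

check() {
  path="$1"; label="$2"
  if curl -fsS --max-time 5 -o /dev/null "$BASE$path"; then
    echo "ok: $label"
  else
    echo "FAILED: $label ($BASE$path)"
  fi
}

check /alive "service answers its health endpoint"
check /      "web vault page is served"
```

Client login and notification behavior still need a human check, but this catches the "database restored, service dead" class of failure immediately.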

If your backup plan is vague, your deployment is fragile no matter how polished the container setup looks.

This is also where self-hosting ideology gets boring quickly. The responsible self-hosting stance is not "trust no cloud." It is "if I take custody of the system, I also take custody of recovery."

What I would actually deploy

For a normal serious private deployment:

  1. Vaultwarden bound to localhost or a private container network
  2. reverse proxy in front
  3. HTTPS terminated at the proxy
  4. signups disabled
  5. invitations considered deliberately, not left on by inertia
  6. password hints disabled unless you truly want them
  7. admin token handled carefully
  8. WebSocket logging reviewed
  9. backups automated and restore-tested

That is enough to be a good deployment. You do not need to turn it into a security-theater diorama.

And if you want more defense around the edge, keep it aligned with the same boring principle: cleaner exposure, simpler recovery, fewer privileged surfaces. Fancy layering that makes routine maintenance harder is usually the wrong trade for this service.

This is also why I would rather see a clean localhost bind plus a normal reverse proxy than an unnecessarily ornate edge stack with five special cases. The service you are protecting contains secrets. Clarity is defensive here.

The RouteHarden opinion

Self-hosting Vaultwarden is perfectly reasonable if you are willing to be boring about it.

The most common failure mode is not that Vaultwarden itself is unserious. It is that operators deploy it like a convenience app instead of like the small secret-management service it actually is. They publish it too directly, leave signups looser than intended, treat the admin surface casually, and postpone backup discipline until after the first scare.

Do the opposite.

Keep it local. Put a sane reverse proxy in front. Minimize exposed workflows. Be careful with notification logs. Back it up like you mean it.

That is not flashy. It is better.

If the deployment drifts toward "convenient enough that I forget this is a password manager," pull it back. Discipline is the feature here.