OpenVPN, the friendly compromise
Why OpenVPN lasted so long: TLS in user space, TUN vs TAP, UDP vs TCP, and the flexibility costs that newer tunnels tried to remove.
OpenVPN won the 2010s VPN era. Not because it was elegant — it isn't — and not because it was fast — it isn't. It won because it ran in user space on every operating system, used TLS that firewalls were already trained to allow, tunneled comfortably through NAT and HTTP proxies, and gave administrators a familiar PKI workflow. By the time WireGuard reached production maturity in 2020, OpenVPN had over a decade of corporate inertia: working configs, trained operators, integrated MFA, certificate-management tools, and audited compliance configurations. Replacing that takes years, and in many environments it hasn't happened yet.
This module dissects what OpenVPN actually does on the wire and inside its process: why its design made it the most deployable VPN of its era, what each architectural choice cost, and where it still wins after WireGuard exists. We'll cover TUN versus TAP, UDP versus TCP transport, static-key versus TLS modes, how the control and data channels coexist in a single transport flow, and the operational consequences of running tunnel crypto in a user-space process.
Prerequisites
- udp-the-simplest-transport — OpenVPN's normal outer transport is UDP, and the design assumes UDP semantics.
- tls-1-3-handshake-byte-by-byte — OpenVPN borrows TLS for control-channel authentication and key derivation.
- stream-ciphers-and-aead-construction — the data channel is AEAD (AES-GCM or ChaCha20-Poly1305 in OpenVPN 2.6+).
Learning objectives
- Explain OpenVPN's design as a TLS-secured tunnel running in user space, in contrast to IPsec's kernel-network-layer architecture.
- Compare TUN versus TAP, UDP versus TCP transport, and static-key versus TLS modes — and pick the right defaults.
- Explain why OpenVPN remained dominant for so long despite WireGuard's simplicity and performance advantages.
- Diagnose the main protocol-level costs OpenVPN pays for its flexibility.
Why OpenVPN won the "works almost anywhere" era
VPN protocols of the early 2000s split into two unhappy camps. IPsec was the standards-body answer — IETF-blessed, kernel-integrated, AH/ESP/IKE/ISAKMP — and it worked, when it worked. NAT broke it. Vendor interoperability broke it. The configuration syntax was a nightmare. PPTP was the easy answer until it turned out to be cryptographically broken in 1998 and again, more thoroughly, in 2012. L2TP/IPsec smashed two protocols together to get authentication and encryption from different layers and inherited the worst configuration story of either.
OpenVPN, released in 2001 by James Yonan, made a stack of pragmatic choices that diverged from the standards-track answer at almost every layer:
- User space, not kernel. The tunnel endpoint runs as a normal Unix process. No kernel module to compile, no kernel-version coupling, no IKE daemon coordinating with an IPsec stack two layers down.
- TLS for the control channel. Authentication and key exchange ride on a TLS handshake the way they do for HTTPS. Operators already understood certificates. Existing PKI tooling worked.
- A single outer transport — UDP or TCP — on a single port. Firewalls saw one flow. NAT saw a normal connection it knew how to track. Administrators didn't have to whitelist three different protocol numbers.
- Cross-platform from day one. A portable C codebase plus the tun/tap kernel facility (the TAP-Windows driver historically, Wintun more recently on Windows) meant Linux, BSD, macOS, Windows, iOS, Android, and OpenWrt routers all ran the same codebase against the same configs.
None of that is technically elegant. It's just deployable. The IETF was busy publishing RFCs while OpenVPN was busy shipping a thing that worked through hotel WiFi captive portals.
The cost was performance and code complexity, both of which we'll get into. But the deployability story is why every consumer VPN provider, every corporate remote-access setup, every Linux self-hoster, and every router OS supported OpenVPN by 2015. WireGuard didn't even exist yet.
TUN, TAP, and what exactly gets tunneled
OpenVPN can present two kinds of virtual network interface to the operating system: TUN (layer 3) and TAP (layer 2). Choosing between them is the first architectural decision in any deployment.
TUN is a layer-3 device. The kernel hands OpenVPN raw IP packets — IPv4 or IPv6 datagrams with no Ethernet framing. OpenVPN encrypts each packet, wraps it in an OpenVPN data-channel header, and ships it over the outer UDP or TCP socket to the peer. The peer decrypts it and writes the IP packet back into its TUN device, where the receiving kernel routes it normally.
TAP is a layer-2 device. The kernel hands OpenVPN full Ethernet frames, complete with source/destination MAC addresses and EtherType. The same encrypt-wrap-send-decrypt-deliver cycle runs, but now the tunnel carries Ethernet, which means the peers behave as if they were on the same physical LAN segment. ARP works. Broadcasts work. Non-IP protocols work — IPX, AppleTalk, anything Ethernet can carry.
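The difference is visible in the first bytes each device hands you. A minimal sketch (hypothetical helper names, not OpenVPN code): a TUN read yields a bare IP packet whose first nibble is the IP version, while a TAP read yields an Ethernet frame with 14 bytes of MAC framing in front.

```python
import struct

def classify_tun_payload(buf: bytes) -> str:
    """A TUN device hands you a raw IP packet: the first nibble
    of the first byte is the IP version (4 or 6)."""
    version = buf[0] >> 4
    return {4: "IPv4", 6: "IPv6"}.get(version, "unknown")

def classify_tap_payload(buf: bytes) -> str:
    """A TAP device hands you a full Ethernet frame:
    6-byte dst MAC, 6-byte src MAC, 2-byte EtherType, then payload."""
    dst_mac, src_mac, ethertype = struct.unpack("!6s6sH", buf[:14])
    return {0x0800: "IPv4", 0x86DD: "IPv6", 0x0806: "ARP"}.get(ethertype, hex(ethertype))

# A minimal IPv4 header starts with 0x45 (version 4, header length 5 words).
ipv4_packet = bytes([0x45]) + bytes(19)
print(classify_tun_payload(ipv4_packet))    # IPv4

# The same packet on TAP arrives behind 14 bytes of Ethernet framing.
eth_frame = bytes(6) + bytes(6) + struct.pack("!H", 0x0800) + ipv4_packet
print(classify_tap_payload(eth_frame))      # IPv4
```

The 14-byte difference is also the per-packet overhead TAP pays, before you count the broadcast traffic it carries.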
For 95% of modern deployments, TUN is correct. TAP is a layer-2 bridge across the internet, which sounds powerful and is mostly a liability:
- More overhead per packet. Every frame carries 14 bytes of Ethernet header that TUN doesn't need.
- Broadcast/multicast traffic crosses the tunnel. Every ARP request, every mDNS query, every printer-discovery broadcast on either side of the bridge gets encrypted and pushed through your VPN. On a noisy LAN this is a substantial fraction of traffic.
- MAC address learning becomes a design problem. Bridges need to know which MAC lives on which side. With more than two endpoints, you get loops and broadcast storms unless you're running spanning tree, which is its own headache over a tunnel.
- It blocks layer-3 routing tricks. Subnet aggregation, source-based policy routing, and multi-WAN setups all assume you're routing IP, not bridging Ethernet.
The legitimate reasons to pick TAP are narrow: you genuinely need to bridge two physical Ethernet segments (an unusual remote-office setup), or you have a non-IP protocol you must carry, or you have a Windows-only application that insists on broadcast discovery to find peers. Otherwise TUN is faster, quieter, and easier to reason about.
WireGuard removed this decision by being layer-3 only. There is no TAP-equivalent. The simplification is widely regarded as correct.
The cryptographic layer
OpenVPN doesn't define its own crypto. It composes existing primitives: TLS for control, AEAD ciphers for data, HMAC for additional integrity layers. The result is more flexible than WireGuard's fixed cryptosuite and considerably more confusing.
A connection has two logical channels multiplexed onto the same outer UDP or TCP flow:
- The control channel carries TLS handshake messages, key material, configuration push (server pushes routes, DNS settings, MTU adjustments), and rekey events. It's reliability-layered: OpenVPN implements its own ack/retransmit on top of the outer transport so that even on UDP, control messages aren't lost.
- The data channel carries the actual encrypted user traffic. It's not reliability-layered; lost data packets are just lost (unless the inner protocol is TCP, in which case the inner TCP retransmits — exactly as IP packet loss handling works on a normal network).
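The control channel's reliability layer can be sketched as a small send-buffer state machine (a toy model, not OpenVPN's implementation): every control packet is held until acknowledged and retransmitted when its timer expires, regardless of what the outer transport does.

```python
class ControlChannelSender:
    """Toy sketch of a control-channel reliability layer: packets are
    kept until acked and retransmitted on a timeout, so TLS handshake
    records survive loss even over unreliable UDP."""
    def __init__(self, rto_ticks: int = 2):
        self.next_id = 0
        self.unacked = {}            # packet_id -> (payload, deadline)
        self.rto = rto_ticks

    def send(self, payload: bytes, now: int) -> int:
        pid = self.next_id
        self.next_id += 1
        self.unacked[pid] = (payload, now + self.rto)
        return pid

    def on_ack(self, pid: int) -> None:
        self.unacked.pop(pid, None)  # acked: stop retransmitting it

    def due_retransmits(self, now: int) -> list:
        due = [pid for pid, (_, deadline) in self.unacked.items() if deadline <= now]
        for pid in due:              # reschedule each retransmitted packet
            payload, _ = self.unacked[pid]
            self.unacked[pid] = (payload, now + self.rto)
        return due

tx = ControlChannelSender()
first = tx.send(b"tls-record-1", now=0)
second = tx.send(b"tls-record-2", now=0)
tx.on_ack(first)
print(tx.due_retransmits(now=2))     # [1] — only the unacked packet fires
```

The data channel deliberately has none of this machinery; it inherits whatever loss behavior the outer transport has.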
The control channel runs TLS, with mutual authentication via X.509 certificates (the standard production setup) or via TLS-PSK in rare configurations. The TLS handshake authenticates each side and derives the master secret used to bootstrap data-channel keys.
The data channel keys are derived from the TLS master secret using OpenVPN's own KDF (a TLS-1.0-style PRF in older versions; updated key derivation in 2.6+). Each direction gets its own key. Rekeying happens on a schedule — typically every hour, configurable via reneg-sec — and a fresh TLS handshake produces fresh data-channel keys without dropping the tunnel.
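The TLS-1.0-style PRF that older OpenVPN versions used is built on an HMAC expansion called P_hash. A sketch of that shape (the labels and secret-splitting below are illustrative, not OpenVPN's exact values):

```python
import hmac
import hashlib

def p_hash(secret: bytes, seed: bytes, length: int, algo=hashlib.sha1) -> bytes:
    """TLS-1.0-style P_hash expansion:
    A(0) = seed; A(i) = HMAC(secret, A(i-1));
    output = HMAC(secret, A(1)+seed) || HMAC(secret, A(2)+seed) || ..."""
    out = b""
    a = seed
    while len(out) < length:
        a = hmac.new(secret, a, algo).digest()
        out += hmac.new(secret, a + seed, algo).digest()
    return out[:length]

# Hypothetical illustration: expand a master secret into two
# per-direction data-channel keys. The label and randoms are made up.
master = b"\x01" * 48
block = p_hash(master, b"key expansion" + b"client-random" + b"server-random", 64)
key_client_to_server, key_server_to_client = block[:32], block[32:]
```

The important property is that each direction gets an independent key derived from the same handshake, so a rekey only has to rerun the expansion with fresh handshake output.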
On top of the TLS authentication, OpenVPN historically supported two additional integrity layers on the control channel itself:
- tls-auth adds an HMAC signature over every control-channel packet using a static pre-shared key. Packets failing the HMAC check are dropped before TLS even processes them. The point isn't to add cryptographic strength to TLS — TLS is already authenticated — it's to filter junk early. An attacker who doesn't know the static HMAC key can't even get OpenVPN to spend CPU on a TLS handshake, which raises the bar against port scans and handshake-flood denial-of-service attacks.
- tls-crypt (added in OpenVPN 2.4) goes one step further: it encrypts the entire TLS handshake using a pre-shared key. Now the handshake itself is opaque on the wire; certificates aren't even visible to a passive observer. This both raises the DoS bar and makes the protocol harder to fingerprint, which matters in environments where TLS-with-X.509-certs is a recognizable pattern censors target.
The data channel itself is now AEAD by default in OpenVPN 2.6: AES-GCM (128 or 256-bit) or ChaCha20-Poly1305. Older deployments may still use AES-CBC with a separate HMAC, which is the encrypt-then-MAC construction that TLS 1.0–1.2 used. OpenVPN 2.6 deprecated the non-AEAD modes and made data-ciphers AES-256-GCM:AES-128-GCM:CHACHA20-POLY1305 the default. If you're running an older config with cipher AES-256-CBC and auth SHA1, you're past time to upgrade.
The composition is straightforward in principle: TLS authenticates and bootstraps; the data channel uses fresh AEAD keys per direction; rekeying via TLS replaces those keys every hour. The complications are all in the option surface — there are dozens of --tls-*, --cipher, --auth, --data-ciphers, --ncp-* knobs, with subtle backward-compatibility interactions. WireGuard's choice to ship one cryptosuite (Curve25519 + ChaCha20-Poly1305 + BLAKE2s + Noise IK) and refuse to negotiate algorithms eliminates all of that.
UDP versus TCP transport
OpenVPN can wrap its tunnel in either UDP or TCP. This is one of the most consequential and most misunderstood configuration choices.
The default and correct answer for almost everyone is UDP. Here's why.
The traffic you're tunneling is, in the common case, mostly TCP — HTTP, SSH, IMAP, anything important. TCP includes its own retransmit-on-loss, in-order delivery, and congestion control. When you run TCP inside a TCP outer tunnel, you stack two reliability layers on top of each other, and they fight.
The pathology is called TCP-over-TCP meltdown, and the timeline looks like this. Suppose the outer TCP path drops a packet. The outer TCP detects the loss, waits for its retransmit timer, and retransmits. While the outer TCP is waiting, the inner TCP — which runs on a much shorter timer because it sees what looks like a clean local network — also notices it's missing data and retransmits its segment. Now the outer TCP eventually delivers both the original and the retransmit, and the inner TCP receives duplicate data, plus its own retransmits also queued behind the outer TCP's recovery. Throughput collapses; latency balloons; the connection feels like it's gargling underwater.
UDP avoids this entirely. The outer transport just delivers what it can. Lost outer packets mean lost inner packets, which the inner TCP handles with its normal congestion-control behavior — exactly as if the inner packets had been lost on a normal IP path with no tunnel involved. There's only one reliability layer. It works.
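The interaction of the two retransmit timers can be put into a toy arithmetic model (deliberately simplified: one loss event, fixed outer recovery time, exponential backoff on the inner timer, numbers invented for illustration):

```python
def spurious_inner_retransmits(outer_rto: float, inner_rto: float) -> int:
    """Toy model of TCP-over-TCP: one outer-path loss at t=0; the outer
    TCP recovers at t=outer_rto. Until then the inner TCP, seeing
    silence on what looks like a clean local link, retransmits on its
    own (shorter) timer with exponential backoff. Every such inner
    retransmit is spurious: the outer TCP will deliver the original
    bytes anyway once it recovers."""
    t, rto, count = 0.0, inner_rto, 0
    while t + rto < outer_rto:
        t += rto
        count += 1
        rto *= 2          # exponential backoff, as real TCP RTO does
    return count

# Hypothetical numbers: lossy WAN with 3 s outer RTO, 200 ms inner RTO.
print(spurious_inner_retransmits(3.0, 0.2))   # 3 duplicate retransmits

# UDP outer transport: no outer recovery delay, so nothing stacks.
print(spurious_inner_retransmits(0.0, 0.2))   # 0
```

Every spurious retransmit is data queued behind the outer TCP's recovery, which is why throughput collapses rather than merely dipping.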
So why does TCP mode exist?
- Restrictive firewalls. Some networks block all UDP except DNS and NTP, sometimes deliberately to prevent VPN traffic. If your only options are TCP/443 or no tunnel, you take TCP/443. The performance hit is real but a slow tunnel beats no tunnel.
- Censorship environments. UDP-on-non-standard-ports is statistically suspicious in many traffic-analysis pipelines. TCP/443 looks like HTTPS. (This is also why protocols like sing-box's REALITY and Trojan-GFW choose TCP/TLS as their default carrier — see xray-reality-vs-wireguard for the full censorship-evasion story.)
- Proxy traversal. HTTP CONNECT proxies and SOCKS proxies handle TCP. If your tunnel must go through such a proxy, TCP is mandatory.
- Some captive portals only allow TCP outbound. Hotel WiFi, conference networks, airline internet — UDP often gets dropped silently while TCP/443 passes.
The trade-off: in any of these environments, you're already getting suboptimal network performance, so the additional cost of TCP-over-TCP is one more papercut in a stack of papercuts. When the alternative is no tunnel at all, the math is easy.
The defensive move on long-haul lossy paths is to run UDP if at all possible, raise the inner MTU sensibly to avoid fragmentation, and accept that a slightly leaky tunnel is much faster than a perfectly reliable one with stacked retransmits.
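"Raise the inner MTU sensibly" is arithmetic: the inner packet plus OpenVPN's framing plus the outer headers must fit the path MTU. The overhead numbers below are typical for UDP transport with an AEAD cipher and P_DATA_V2 framing, but they vary with configuration, so treat this as a hedged sketch rather than a formula for every deployment:

```python
# Typical per-packet overhead for OpenVPN over UDP with AEAD (P_DATA_V2).
OUTER_IPV4 = 20     # outer IPv4 header (40 for IPv6)
OUTER_UDP = 8       # outer UDP header
OPCODE_PEER_ID = 4  # 1-byte opcode/key-id + 3-byte peer-id
PACKET_ID = 4       # replay-protection counter
AEAD_TAG = 16       # GCM / Poly1305 authentication tag

def inner_mtu(outer_mtu: int = 1500) -> int:
    """Largest inner IP packet that fits the outer path MTU
    without fragmenting, under the overhead assumptions above."""
    overhead = OUTER_IPV4 + OUTER_UDP + OPCODE_PEER_ID + PACKET_ID + AEAD_TAG
    return outer_mtu - overhead

print(inner_mtu())      # 1448 with these assumptions
```

If the inner MTU is set larger than this, each tunneled packet fragments at the outer layer, and one lost fragment costs the whole packet — exactly the failure mode you least want on an already-lossy path.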
Static-key mode versus TLS mode
OpenVPN has two fundamentally different authentication modes. Static-key mode uses a single pre-shared symmetric key for both endpoints. TLS mode uses certificates and performs a TLS handshake.
Static-key mode is the simple option. You generate one key on one peer with openvpn --genkey secret static.key, copy the file to the other peer, and reference it in both configs:
# server config (static-key mode)
dev tun
proto udp
local 0.0.0.0
lport 1194
secret static.key
ifconfig 10.8.0.1 10.8.0.2
# client config (static-key mode)
dev tun
proto udp
remote vpn.example.com 1194
secret static.key
ifconfig 10.8.0.2 10.8.0.1
That works. It also doesn't scale, and it gives up most of OpenVPN's value:
- No forward secrecy. If the static key leaks, every past and future session encrypted with it can be decrypted (assuming the attacker captured the ciphertext). TLS mode rotates ephemeral keys.
- No identity. The static-key tunnel can't distinguish "alice's laptop" from "bob's laptop" — it's just two endpoints sharing one secret. You can't revoke alice without revoking everybody.
- No two-factor or external-IDP integration. TLS mode supports auth-user-pass-verify plugins, PAM hooks, OIDC integration — none of which makes sense in static-key mode.
- Manual key distribution. Every new peer needs the static key. Every key rotation means touching every endpoint.
Static-key mode exists for two real use cases: a one-off site-to-site link between two routers you control, and protocol experimentation. For everything else, TLS mode with a proper PKI is the right answer:
# server config (TLS mode)
dev tun
proto udp
port 1194
ca ca.crt
cert server.crt
key server.key
dh dh2048.pem
tls-crypt tls-crypt.key
data-ciphers AES-256-GCM:CHACHA20-POLY1305
verify-client-cert require
remote-cert-tls client
server 10.8.0.0 255.255.255.0
push "route 192.168.10.0 255.255.255.0"
push "dhcp-option DNS 10.8.0.1"
keepalive 10 60
persist-key
persist-tun
verb 3
# client config (TLS mode)
client
dev tun
proto udp
remote vpn.example.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
ca ca.crt
cert client-alice.crt
key client-alice.key
tls-crypt tls-crypt.key
data-ciphers AES-256-GCM:CHACHA20-POLY1305
remote-cert-tls server
verb 3
This is recognizably an OpenVPN config in the wild. The CA file establishes trust; the server and client present X.509 certificates signed by that CA; tls-crypt opaques the handshake; data-ciphers constrains what the data channel will negotiate; push tells the client what routes and DNS to use after the tunnel comes up.
Issuing per-user certificates and revoking them through a CRL or OCSP gives you real identity management. Distributing the CA cert to clients is once-per-deployment, not per-user. This is the model every enterprise OpenVPN deployment uses.
User-space flexibility versus kernel-space cost
Running the tunnel endpoint as a user-space process is OpenVPN's biggest architectural choice and biggest performance cost. Every packet does more work than it would in a kernel implementation.
The data path for an outbound packet looks roughly like this:
- An application calls send() to a destination on the tunneled subnet.
- The kernel routes the packet to the TUN interface.
- The kernel queues the packet on the TUN device's read queue.
- The OpenVPN process, blocked in read() on the TUN file descriptor, wakes up — context switch from kernel to user space.
- OpenVPN copies the packet from a kernel buffer into a user-space buffer.
- OpenVPN runs the AEAD encrypt over the buffer.
- OpenVPN constructs the OpenVPN data-channel header and prepends it to the ciphertext.
- OpenVPN calls sendto() on the outer UDP socket.
- The kernel copies the assembled packet back from user space.
- The kernel runs the outer UDP send path — IP encapsulation, route lookup, NIC driver hand-off.
Each direction crosses the user/kernel boundary twice and copies the packet twice. Compare to WireGuard, which runs in the kernel: the packet stays in kernel memory, no context switches, no copies. On a busy server saturating a 10 Gbps link, this is a measurable difference — WireGuard typically reaches multiples of OpenVPN's per-core throughput.
Modern OpenVPN has narrowed the gap somewhat. Version 2.6 introduced DCO (Data Channel Offload), which pushes the data-channel encryption into a kernel module so the per-packet path stops crossing the user/kernel boundary. With DCO enabled, the user-space openvpn process handles only control-channel work — TLS handshakes, rekeys, configuration push — while the kernel module does the high-volume packet crypto. Reported numbers put DCO at multiples of legacy OpenVPN's throughput, narrowing but not closing the gap with WireGuard.
DCO is opt-in, requires a recent kernel, and isn't the default in most deployed OpenVPN configurations as of 2026. If you're running OpenVPN at scale, enabling DCO is generally a free win; if you're stuck on an older Linux distribution or running OpenVPN on Windows or macOS, you're paying the user-space cost.
The flip side of user-space is exactly what made OpenVPN dominant: portability and flexibility. Plugins via the OpenVPN plugin API, scripts via auth-user-pass-verify, custom routing logic, IDP integration — none of which would be appropriate to graft onto a kernel module. The user-space process is a reasonable place to put complex logic. It's a poor place to put a per-packet crypto fast path. DCO splits the difference.
OpenVPN's control-plane and data-plane split
A useful mental model for any tunnel is to separate the control plane (negotiation, authentication, key management) from the data plane (the bytes that move actual traffic). OpenVPN bundles both into the same outer transport flow, distinguishing them with an opcode in each packet header.
Every OpenVPN packet starts with a one-byte opcode (technically, a 5-bit opcode and a 3-bit key ID). The opcodes split into two families:
- Control-channel opcodes — P_CONTROL_HARD_RESET_CLIENT_V2, P_CONTROL_HARD_RESET_SERVER_V2, P_CONTROL_V1, P_ACK_V1, etc. These carry TLS handshake records, ack messages, and control payloads. OpenVPN runs its own ack-retransmit logic over them so they survive UDP loss.
- Data-channel opcodes — P_DATA_V1 and P_DATA_V2. These carry the encrypted tunneled IP packets. No retransmit; what's lost is lost.
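Parsing that first byte is two bit operations. A minimal sketch (the opcode constants below match OpenVPN's published values as I understand them; treat them as illustrative):

```python
# Opcode values as assumed from OpenVPN's wire-format documentation.
P_CONTROL_V1 = 4
P_ACK_V1 = 5
P_DATA_V1 = 6
P_CONTROL_HARD_RESET_CLIENT_V2 = 7
P_CONTROL_HARD_RESET_SERVER_V2 = 8
P_DATA_V2 = 9

def parse_first_byte(b: int):
    """High 5 bits: opcode (which packet family this is).
    Low 3 bits: key ID (which data-channel key generation
    the packet was encrypted under)."""
    return b >> 3, b & 0x07

opcode, key_id = parse_first_byte((P_DATA_V2 << 3) | 2)
print(opcode, key_id)    # 9 2 — a data packet under key generation 2
```

The receiver branches on the opcode family first: control-family packets feed the reliability layer and TLS, data-family packets go straight to AEAD decryption under the key the key-ID bits select.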
Mixing both onto one outer flow simplifies firewall configuration (open one port, get everything) and complicates the implementation: the control-channel ack/retransmit machinery is a TCP-lite living inside OpenVPN — a separate reliability state machine that only handshake and rekey traffic uses, but that still has to be implemented, tuned, and debugged alongside everything else.
Rekeying happens periodically via the control channel. The default trigger is time-based — every hour, set by reneg-sec — with optional data-volume triggers (reneg-bytes, reneg-pkts) available as well. When a rekey fires:
- The TLS layer initiates a renegotiation.
- A fresh TLS handshake derives new master-secret material.
- New data-channel keys are derived for both directions.
- Both peers switch to the new keys for outbound traffic, while still accepting old keys for in-flight inbound traffic for a brief grace window.
- Old keys are erased.
The grace window matters because UDP can deliver packets out of order. A packet sent under the old key while the rekey was in flight might arrive after both sides have switched — without the grace window, that packet would be dropped as "wrong key" and the inner TCP would retransmit unnecessarily.
The key-ID bits in the OpenVPN header are how the receiver knows which key to try. Each rekey advances the ID; the receiver maintains a small set of recent keys. WireGuard does similar key rotation via Noise's automatic rekey, but with a much simpler state machine — there's no in-band negotiation needed because both sides derive the new keys deterministically from the same DH shared secret material.
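The receiver side of the grace window can be sketched as a tiny key table (a minimal illustration, not OpenVPN's code): keep the newest two key generations, look packets up by key ID, and erase anything older.

```python
class DataChannelReceiver:
    """Minimal sketch of rekey grace-window handling: retain the
    current key generation plus the previous one, so packets encrypted
    under the old key and reordered across a rekey still decrypt."""
    def __init__(self, key_id: int, key: bytes):
        self.keys = {key_id: key}    # key ID (3 bits, 0-7) -> key material

    def rekey(self, new_key_id: int, new_key: bytes) -> None:
        self.keys[new_key_id] = new_key
        while len(self.keys) > 2:    # keep only the newest two generations
            # Simplification: ignores the 3-bit key-ID wrap-around at 8.
            self.keys.pop(min(self.keys))

    def key_for(self, key_id: int):
        """Return key material for a packet's key-ID bits, or None
        if that generation has already been erased."""
        return self.keys.get(key_id)

rx = DataChannelReceiver(0, b"gen0")
rx.rekey(1, b"gen1")
print(rx.key_for(0), rx.key_for(1))  # b'gen0' b'gen1' — grace window open
rx.rekey(2, b"gen2")
print(rx.key_for(0))                 # None — gen0 erased
```

A packet arriving with key-ID bits that miss the table is dropped, which is exactly what happens in production when a peer's rekey window is configured too short for the path's reordering.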
Hands-on exercise
Read a minimal server/client config pair and explain it.
Tools: any text editor. Runtime: 10 minutes.
Save the TLS-mode server and client configs from the section above into server.conf and client.conf. Then, line by line, explain to yourself:
- What does dev tun do? Why not dev tap?
- What does proto udp set, and what would change about runtime behavior if you switched to proto tcp-server / proto tcp-client?
- What's the difference between cert and ca? What would happen if you reused the CA cert as the server cert?
- The server has tls-crypt tls-crypt.key and the client has the same line. What is that key doing? What would happen if only one side had it?
- The server pushes route 192.168.10.0 255.255.255.0 to the client. After the tunnel comes up, what kernel routing-table entry will the client have?
- The server sets data-ciphers AES-256-GCM:CHACHA20-POLY1305. What does the colon-separated list represent in negotiation terms?
Stretch: rewrite both configs to use proto tcp-server / proto tcp-client, and predict (without testing) what changes about the runtime behavior on a normal stable network and on a 5%-packet-loss path. (Hint: TCP-over-TCP meltdown.)
Optional: inspect supported ciphers.
openvpn --show-ciphers # list AEAD and legacy ciphers available
openvpn --show-tls # list TLS ciphersuites available
openvpn --show-curves # list supported elliptic curves for TLS
The output separates control-channel TLS options (set by --tls-cipher, --tls-ciphersuites, --ecdh-curve) from data-channel options (set by --data-ciphers). Confirming this separation in your own head is the goal — control and data channels have separate cryptographic configurations even though they share a transport flow.
Common misconceptions and traps
"OpenVPN is just slower WireGuard." OpenVPN solves a different deployment problem: arbitrary-PKI authentication, plugin extensibility, NAT-and-proxy tolerance, cross-platform sameness, and operator familiarity. WireGuard solves the "small tight performant tunnel between known peers" problem better, but doesn't address (and explicitly rejects scope-creep into) most of OpenVPN's flexibility surface. The right comparison isn't speed-versus-speed; it's "what shape of deployment do you have?" If your answer is "1000 employees, hardware tokens, certificate revocation lists, and three different MFA backends," WireGuard alone doesn't solve that — you'd build something on top.
"TCP mode is more reliable, so it's safer." TCP mode adds reliability you don't want at the wrong layer. The inner protocol (usually TCP itself) already handles reliability. Adding outer TCP creates the meltdown described earlier. UDP is the correct default; TCP is a workaround for hostile networks.
"TAP mode is more complete, so it's better." TAP bridges layer 2, which means broadcast and multicast traffic crosses the tunnel, which is almost always operational waste. The only legitimate uses for TAP are bridging physical Ethernet segments and supporting non-IP protocols. TUN is the correct default.
"A static key is fine because the tunnel is private." Static-key mode loses forward secrecy, identity management, revocation, and external-IDP integration. The "privacy" of the tunnel doesn't excuse those losses. For anything beyond a one-off site-to-site link, use TLS mode.
"Because it uses TLS, OpenVPN is basically HTTPS." OpenVPN borrows TLS for the control channel only. The data channel is its own framing on its own crypto. The outer transport may be UDP, which HTTPS never is. The flow doesn't look like HTTPS to a deep-packet-inspection tool — TLS-over-UDP on a non-443 port is one of the most recognizable VPN signatures there is. (tls-crypt helps obscure the handshake but doesn't hide the protocol family.)
"tls-auth and tls-crypt are the same thing with different names." They aren't. tls-auth HMACs the control-channel packets with a static key — the handshake is still visible to anyone who can sniff the wire. tls-crypt encrypts the entire handshake with the static key — the handshake is opaque to passive observers. For new deployments, prefer tls-crypt.
"OpenVPN's port 1194 is special." It's not. It's the default and that's it. There is nothing about the protocol that requires UDP/1194; servers commonly run on TCP/443 to look like HTTPS or on whatever port the operator picks. The IANA assignment is just an assignment.
"Rekeying drops the tunnel." It doesn't. The TLS renegotiation derives new keys without affecting the data channel's ability to carry traffic, modulo a brief key-ID-overlap grace window. If a deployment is dropping connections at rekey boundaries, the rekey window is misconfigured (or the platform's renegotiation handling is broken).
Where OpenVPN still makes sense
After WireGuard exists, after IPsec is honest about its niches, after Tailscale automates the NAT-traversal mesh case (covered in tailscale-and-wireguard-mesh — coming soon), what's left for OpenVPN?
Mature corporate PKI integration. If your organization has spent ten years building out X.509 issuance, CRL distribution, OCSP responders, smartcard logon, and certificate-based MFA, OpenVPN slots into that infrastructure naturally. WireGuard's "just exchange public keys" model is conceptually simpler but doesn't speak the same language as your existing identity stack — you end up building a control plane on top of WireGuard to do what OpenVPN's TLS-with-X.509 does out of the box.
Hostile network environments where TCP/443 is the only escape hatch. If your users connect from networks that block all UDP and all non-HTTPS-looking traffic, OpenVPN over TCP/443 is one of the few options that just works. It's slow on lossy paths because of TCP-over-TCP, but it's a tunnel that exists. (For deliberately censorship-resistant setups the better answer is usually a sing-box stack with REALITY or a similar TLS-camouflage protocol — see xray-reality-vs-wireguard — but those require more sophisticated server setup than dropping in OpenVPN does.)
Plugin ecosystem. OpenVPN has a long history of pluggable authentication: PAM, LDAP/Active Directory, RADIUS, OIDC, hardware token integration via plugins. WireGuard explicitly rejects this scope — it's a tunnel, not an auth platform — which means if your auth requirement is non-trivial, OpenVPN may genuinely be the simpler total-system choice.
Cross-platform consistency for the non-technical user. OpenVPN clients exist for every operating system in nearly identical form. Configuration files are portable. WireGuard is also cross-platform but its identity model (each device has its own keypair you must enroll) is harder for casual users to manage than "import this .ovpn file."
Inertia. Replacing a working VPN with a different working VPN is a project. Even if WireGuard is technically better, the migration cost — re-issuing client configs to thousands of users, retraining support staff, validating compliance against new audit logs — has to clear a real bar. Many operators correctly conclude that their working OpenVPN deployment doesn't need to be migrated until it breaks.
The argument against OpenVPN-for-new-deployments is real: WireGuard is faster, simpler, and harder to misconfigure. For a fresh setup serving technical users on a controlled set of devices, WireGuard (self-hosted-wireguard-2026) is the right starting point. But "all new VPN setups should be WireGuard" overstates the case. Operators should pick tools that match their organizational shape, not their preferred internet aesthetic. OpenVPN remains the right answer for a non-trivial fraction of real deployments.
Wrapping up
OpenVPN is the friendly compromise of VPN protocols: portable, flexible, well-understood, slower than it could be. Its design choices — user-space process, TLS for control, AEAD for data, single-port multiplexing, X.509 PKI, optional tls-crypt opacity — were the right answers for the deployment problems of 2005-2020 and remain the right answers for some deployment problems today. WireGuard relegates OpenVPN to legacy in the technical-purity sense; it doesn't actually replace OpenVPN in many real environments.
The next module (wireguard-from-first-principles — coming soon) goes through WireGuard byte by byte: how the Noise IK handshake actually works, why the cryptosuite is fixed, how the kernel implementation achieves its speed, and where WireGuard's deliberate scope limitations leave gaps you'll need to fill above the protocol layer.
Further reading
- OpenVPN 2.6 Manual — current option surface, deployment model, and DCO documentation.
- OpenVPN Cryptographic Layer — the canonical explanation of static-key versus TLS mode and control/data-channel separation.
- OpenVPN Protocol — wire-level packet framing and transport mechanics.
- OpenVPN Getting Started — current defaults, TLS minimums, and DCO setup.
- Computer Networks: A Systems Approach — useful systems-level context for why user-space tunnels pay performance costs and how DCO-style hybrids reclaim some of it.