Anonymity Engineering · 23 min read · advanced

OS and TCP/IP stack fingerprinting

How TCP SYN fields, TLS ClientHello structure, and HTTP/2 settings betray client identity even when the payload is encrypted.

The previous module showed that browsers leak identity through application-layer feature surfaces. This module steps below the browser to the operating system and the network stack, where the same general phenomenon — implementation details that protocols don't standardize — produces fingerprints at lower layers. A TCP SYN packet from a Linux 6.x kernel looks subtly different from one from Windows 11 or macOS 14. A TLS ClientHello from Firefox looks different from one from Chrome or curl. An HTTP/2 SETTINGS frame from one client library differs from another's. Each layer adds entropy; combined, they produce per-client identifiers that work even when payloads are encrypted and IP addresses are anonymized.

The architectural insight is that protocol RFCs define wire compatibility, not identical behavior. Two implementations that both follow the same RFC make different choices about defaults, option ordering, optional-feature inclusion, and edge-case handling. Those choices become signatures. The signatures are observable to anyone in the path; combining signatures from multiple layers produces a robust identifier that's hard to defeat by changing any single layer.

This module walks through the layered fingerprinting stack: TCP SYN-level fingerprinting (the classic p0f territory), TLS ClientHello fingerprinting (JA3 and JA4 — the canonical references already covered in ja3-ja4-tls-fingerprinting, so we'll go deeper into the architectural picture), HTTP/2 settings fingerprinting (the application-layer fingerprint that catches what TLS alone can miss), and the layer-fusion model where defenders and trackers combine all of them. The defensive implications: standardization at one layer doesn't help if the others still leak; anti-fingerprinting requires consistency across the stack; this is why VPNs and Tor alone don't address client identity.

Prerequisites

Learning objectives

  1. Explain how TCP option ordering, initial TTL, window sizing, MSS, timestamps, and related fields expose OS and stack behavior.
  2. Distinguish passive stack fingerprinting from active probing, and from higher-layer fingerprints such as JA4 and HTTP/2 settings signatures.
  3. Compare p0f-style TCP fingerprinting with JA4/JA4H and HTTP/2 settings fingerprints as layered observation tools.
  4. Evaluate how NATs, proxies, TLS libraries, and browsers reshape or blur the signal at each layer, and where the signals remain robust.

Why implementations leak identity

A protocol RFC specifies wire formats and required behaviors but rarely mandates exact internal choices. TCP, for example, says nothing about which order to put TCP options in the SYN, whether to include certain optional fields, what initial TTL to use, what initial window size to advertise. Each implementation makes its own choices.

The choices are stable per-implementation. Linux's TCP stack has used a particular SYN option ordering for decades; Windows uses a different one; macOS another. The choices reflect implementer preferences, historical baggage, performance optimizations, and platform conventions. None of them violate the RFC; all of them are observable.

When you combine these per-implementation choices into a "fingerprint," the result identifies the implementation (and often the version, since stacks change subtly across versions). Any observer in the network path can read these fields from the unencrypted lower layers and infer a substantial amount about the source — what OS, what version, sometimes what specific kernel build.

Importantly: this works passively. The observer doesn't have to send anything to the target; they just have to observe traffic the target is already sending. This makes the fingerprinting nearly invisible to the target. A normal-looking SYN to your normal-looking destination is readable by every router and middlebox in the path.

The pattern repeats at higher layers. TLS ClientHellos from different libraries look different. HTTP/2 SETTINGS frames from different libraries differ. Each layer adds its own entropy, and combining them produces fingerprints with much higher specificity than any single layer.

The TCP SYN as a fingerprint surface

A TCP SYN packet contains the following fields that vary per OS:

Initial TTL. The Time-To-Live value at the IP layer that the OS sets before sending. Common values: Linux uses 64; Windows uses 128; routers/some embedded systems use 255. Some legacy systems use 32 or 60. The observed TTL at the destination is reduced by the number of hops; an observer can usually estimate the source TTL by rounding up to the nearest standard value (e.g., observed TTL 51 → source TTL 64 → likely Linux/Unix).
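The rounding-up step can be sketched in a few lines. This is an illustrative helper, not code from any real tool; it rounds only to the modern defaults (64, 128, 255), since legacy values like 32 and 60 make the rounding ambiguous:

```python
# Sketch: estimate a sender's initial TTL by rounding the observed TTL up
# to the nearest common default, then map that default to an OS family.
COMMON_INITIAL_TTLS = (64, 128, 255)  # legacy 32/60 omitted to keep rounding unambiguous

def estimate_initial_ttl(observed_ttl: int) -> int:
    """Round an observed TTL up to the nearest standard initial value."""
    for candidate in COMMON_INITIAL_TTLS:
        if observed_ttl <= candidate:
            return candidate
    return observed_ttl  # already above all known defaults

def guess_os_family(initial_ttl: int) -> str:
    # Coarse mapping; real tools combine this with many other SYN fields.
    return {64: "Linux/Unix-like", 128: "Windows", 255: "router/embedded"}.get(
        initial_ttl, "unknown"
    )

print(estimate_initial_ttl(51))                        # 51 observed -> 64 at the source
print(guess_os_family(estimate_initial_ttl(51)))       # Linux/Unix-like
```

The estimate assumes fewer than ~32 hops, which holds for almost all Internet paths.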

Window size. Initial TCP window size advertised in the SYN. Modern Linux typically advertises 64240 (an MSS-derived multiple: 1460 × 44); Windows commonly uses 8192 or 65535; macOS uses 65535. Older systems used a variety of values. The window size combined with the window scaling factor encodes the actual offered window.

Maximum Segment Size (MSS). The MSS option declares the largest segment the sender wants to receive. Standard values are 1460 (Ethernet MTU 1500 minus the 40-byte TCP+IP header), 1452 (PPPoE, MTU 1492), or smaller for tunneled paths. The MSS itself isn't very identifying, but the way it's chosen often is.

Window scale factor. The Window Scale option declares how much to shift the window value. Linux typically advertises 7 or 8; Windows uses 8; macOS varies; many embedded systems use lower values. This is a per-OS choice in the typical range.

SACK Permitted. Whether Selective Acknowledgment is enabled. Most modern systems enable it; some legacy or constrained systems don't.

Timestamps option. Whether TCP Timestamps are advertised in the SYN. Most modern Unix-likes enable; Windows historically didn't but does now; some embedded systems don't.

ECN bits. Whether Explicit Congestion Notification capability is signaled. Modern stacks enable; some don't.

TCP Option order. This is the powerful one. The order in which TCP options appear in the SYN is a per-OS choice. A Linux SYN typically has options in one specific order (MSS, SACK, TS, NOP, WS); a Windows SYN has them in a different order (MSS, NOP, WS, NOP, NOP, SACK, TS or similar); macOS has yet another. The order is observable and stable per-OS.

Combining these fields, p0f-style fingerprinting can identify the source OS with high accuracy. Each field contributes a few bits of entropy; the option-order field alone has substantial discriminative power because the number of possible orderings is large but each OS uses only one.

The full p0f signature for a typical Linux SYN looks like:

sig = 4:64:0:1460:mss*44,7:mss,sok,ts,nop,ws:df,id+:0

Reading left to right: IP version (4), initial TTL (64), extra option length (0), MSS (1460), window size and scale (mss*44, 7), TCP option layout (mss, sok, ts, nop, ws), quirks (df, id+), and payload class (0).

The fields combine into a compact signature; p0f's database matches signatures to known OS profiles. Modern Linux 6.x kernels have a different signature than 5.x kernels (subtle changes); macOS Sonoma differs from Ventura; Windows 11 differs from Windows 10. The fingerprint is granular enough to reveal not just OS family but often version and patch level.
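The composition can be sketched as a toy helper (not p0f code) that joins extracted SYN fields into p0f's documented ver:ittl:olen:mss:wsize,scale:olayout:quirks:pclass layout; the field values below are illustrative:

```python
# Illustrative only: compose a p0f v3-style signature string from
# already-extracted SYN fields, following p0f's documented field order.
def p0f_style_signature(ver, ittl, olen, mss, wsize, scale, olayout, quirks, pclass=0):
    return ":".join([
        str(ver), str(ittl), str(olen), str(mss),
        f"{wsize},{scale}", ",".join(olayout), ",".join(quirks), str(pclass),
    ])

sig = p0f_style_signature(
    ver=4, ittl=64, olen=0, mss=1460,
    wsize="mss*44", scale=7,
    olayout=["mss", "sok", "ts", "nop", "ws"],   # typical Linux option order
    quirks=["df", "id+"],                        # don't-fragment set, non-zero IP ID
)
print(sig)  # 4:64:0:1460:mss*44,7:mss,sok,ts,nop,ws:df,id+:0
```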

p0f and the logic of passive OS fingerprinting

p0f (Michal Zalewski's Passive OS Fingerprinter, current version 3) is the canonical tool for passive TCP/IP fingerprinting. It runs on a network interface in promiscuous mode, observes TCP SYNs, and matches them against a database of known signatures.

The architecture:

  1. Capture SYNs. p0f sniffs network traffic and identifies new TCP connection initiations.
  2. Extract features. For each SYN, extract IP TTL, IP fragmentation flags, TCP window size, window-scale option, MSS option, SACK permitted, timestamps, and TCP option order.
  3. Normalize. Round TTL to the nearest standard value to estimate source TTL through unknown hops. Convert window size to a multiple of MSS (revealing the OS's preferred BDP-aware tuning).
  4. Match. Compare the extracted signature against the database, which maps signatures to OS labels (e.g., "Linux 5.x: 4:64:0:1460:mss*44,7:mss,sok,ts,nop,ws:df,id+:0").
  5. Output. Emit a per-connection record: source IP, destination IP, identified OS, confidence.
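The feature-extraction step (step 2) centers on walking the TCP options bytes. Here is a minimal stdlib-only sketch of that walk, applied to a hand-built options blob laid out the way a typical Linux SYN carries them (option kinds per RFC 793/2018/7323):

```python
import struct

# Sketch: extract the TCP option order (the strongest p0f feature) from
# the raw options bytes of a captured SYN.
OPTION_NAMES = {2: "mss", 3: "ws", 4: "sok", 8: "ts", 1: "nop", 0: "eol"}

def parse_option_order(options: bytes) -> list:
    order, i = [], 0
    while i < len(options):
        kind = options[i]
        if kind == 0:        # End of Option List
            break
        if kind == 1:        # NOP is a single byte with no length field
            order.append("nop")
            i += 1
            continue
        length = options[i + 1]
        order.append(OPTION_NAMES.get(kind, f"opt{kind}"))
        i += length
    return order

# Options as a typical Linux SYN carries them:
# MSS(1460), SACK-permitted, Timestamps, NOP, Window Scale(7)
linux_syn_opts = (
    struct.pack("!BBH", 2, 4, 1460)        # MSS
    + bytes([4, 2])                        # SACK permitted
    + struct.pack("!BBII", 8, 10, 1, 0)    # Timestamps (TSval, TSecr)
    + bytes([1])                           # NOP (alignment padding)
    + bytes([3, 3, 7])                     # Window scale, shift 7
)
print(parse_option_order(linux_syn_opts))  # ['mss', 'sok', 'ts', 'nop', 'ws']
```

A Windows SYN fed through the same parser would yield a different sequence, which is exactly the discriminative signal p0f hashes into its signature.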

p0f's database is community-maintained; new OS signatures are added as they're observed in the wild. Modern p0f maintains hundreds of distinct signatures across Linux distributions, Windows versions, macOS versions, BSD variants, embedded systems (routers, IoT devices, printers), and many oddities.

The "passive" in passive fingerprinting matters: p0f never sends anything. It just observes. This is operationally valuable because:

  • It's invisible to the target.
  • It works against connections the target initiates (so it captures clients, not just servers).
  • It accumulates information over time; many SYNs from the same client confirm the fingerprint.
  • It can run on any tap point in the path: an enterprise firewall, an ISP middlebox, a CDN edge.

The limits:

  • The fingerprint identifies the OS, not the user. Many users share the same OS; OS identification narrows the set but doesn't pinpoint individuals.
  • NATs and proxies can rewrite SYN options. A NAT in the path may normalize fields, blurring the source signature. Sophisticated NATs preserve original options; many simple NATs don't.
  • Tunnels (VPN, Tor) replace the SYN entirely. The SYN observed at the tunnel exit is the tunnel exit's, not the user's.
  • Stack modifications (custom TCP tuning via sysctl, tc qdisc-induced changes) can shift the fingerprint.

For an adversary trying to identify "what OS is this anonymous client?", p0f provides useful but not definitive information. For an adversary combining p0f with other layers (TLS fingerprint, HTTP/2 fingerprint, browser fingerprint), the combination becomes much more discriminative.

Limits of TCP-only thinking

TCP fingerprinting is one layer; the picture is incomplete without considering what changes the signal across the path:

Middleboxes and offload. Hypervisor virtual NICs, cloud network interfaces, and SDN gateways can normalize or rewrite TCP options, and NIC offload features can subtly reshape what actually leaves the host. A SYN from a Linux VM running on AWS may not match the same kernel running on bare metal.

VPN tunnels. A VPN encapsulates user traffic in its own packets. An observer between the user and the VPN provider sees only the tunnel's outer packets — the user's SYN rides inside the encrypted payload and isn't directly observable. The SYN observed between the VPN exit and the destination is the VPN exit's SYN, characterizing the exit's OS, not the user's.

Proxies. HTTP proxies, SOCKS proxies, and TLS-terminating proxies all initiate their own TCP connections to destinations. The destination sees the proxy's TCP fingerprint, not the user's. The user's TCP fingerprint is observable on the user-to-proxy hop only.

Carrier-grade NAT (CGNAT). Cellular carriers and ISPs increasingly use CGNAT, which can rewrite TCP options. The SYN observed by destinations of cellular users may show CGNAT-normalized characteristics.

Application stacks that don't use the OS TCP stack. Some high-performance applications use user-space TCP stacks (DPDK, Snabb, custom kernel-bypass implementations). These produce TCP SYNs that don't match any standard OS signature.

Modern stack changes. Linux 6.x kernels changed several TCP defaults compared to 5.x; Windows 11 differs from 10; macOS upgrades change behavior. p0f databases lag behind kernel changes by weeks to months.

The takeaway: TCP fingerprinting is a useful signal but not a deterministic identifier. It's most effective when combined with other layers; alone, it produces probabilistic OS identification with caveats.

TLS fingerprinting as the next layer up

TLS ClientHellos are the second major layer. The detailed treatment lives in ja3-ja4-tls-fingerprinting; here's the architectural framing.

A TLS ClientHello includes:

  • TLS version
  • Cipher suite list (ordered)
  • Compression methods (almost always [0])
  • Extensions (ordered, each with type and content)
    • SNI (server name)
    • supported_groups (elliptic curves)
    • signature_algorithms
    • ALPN (application protocols)
    • key_share (TLS 1.3)
    • psk_key_exchange_modes (TLS 1.3)
    • many more

The fingerprint approach (JA3, JA4): hash the structural elements that don't depend on the user's choice of destination. The classic JA3 hashes:

TLS_version,cipher_list,extension_list,supported_groups,ec_point_formats

So a Firefox 128 ClientHello has a specific JA3; a Chrome 130 has a different one; curl 8.x has a different one. The hash is not unique to a specific user but identifies the TLS library and version.
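The JA3 recipe reduces to string concatenation plus MD5. A minimal sketch, with made-up (but plausibly shaped) field values rather than a real capture of any browser:

```python
import hashlib

# Sketch of the JA3 recipe: join the ClientHello's numeric fields into the
# canonical comma/dash-delimited string and MD5 it.
def ja3(version, ciphers, extensions, groups, ec_formats):
    ja3_string = ",".join([
        str(version),
        "-".join(map(str, ciphers)),
        "-".join(map(str, extensions)),
        "-".join(map(str, groups)),
        "-".join(map(str, ec_formats)),
    ])
    return hashlib.md5(ja3_string.encode()).hexdigest()

fp = ja3(
    version=771,                       # 0x0303, TLS 1.2 on the wire
    ciphers=[4865, 4866, 4867],        # TLS 1.3 suites (illustrative)
    extensions=[0, 23, 65281, 10, 11], # SNI, extended_master_secret, ... (illustrative)
    groups=[29, 23, 24],               # x25519, secp256r1, secp384r1
    ec_formats=[0],
)
print(fp)  # 32-char hex digest, stable for identical ClientHello structure
```

The digest is deterministic: any client whose ClientHello has this exact structure produces the same JA3, which is why the hash identifies the TLS library rather than the user.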

JA4 (FoxIO's revision, 2023) addresses several JA3 limitations:

  • Extension shuffling. Modern browsers (Chrome especially) randomize extension order between ClientHellos to defeat fingerprinting. JA3 hashes the extension order, so randomization breaks the JA3. JA4 sorts extensions before hashing, producing a stable fingerprint despite shuffling.
  • Layered structure. JA4 produces a multi-component fingerprint (TLS version + ciphers hash + extensions hash + ALPN). Components can be analyzed independently.
  • SHA-256 instead of MD5. A stronger hash (truncated in JA4's components); harder for adversaries to forge collisions if needed.
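The sort-before-hashing idea is easy to demonstrate. This is a simplification of the real JA4 spec (which also excludes SNI and ALPN from the sorted list), but it shows why extension shuffling stops mattering:

```python
import hashlib

# Sketch of JA4's key idea: sort extensions before hashing, so Chrome-style
# extension shuffling no longer changes the fingerprint. Simplified relative
# to the actual JA4 spec; the 12-char truncation mirrors JA4's components.
def sorted_extension_hash(extensions):
    canonical = ",".join(f"{e:04x}" for e in sorted(extensions))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

hello_a = [0, 23, 65281, 10, 11]       # one observed extension order
hello_b = [11, 65281, 0, 10, 23]       # same extensions, shuffled
print(sorted_extension_hash(hello_a) == sorted_extension_hash(hello_b))  # True
```

A JA3-style hash over the same two lists would differ, because JA3 preserves wire order.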

JA4+ extends to other protocols:

  • JA4 = TLS ClientHello fingerprint.
  • JA4S = TLS ServerHello fingerprint.
  • JA4H = HTTP client request fingerprint (method, version, header presence and order).
  • JA4L = latency (light-distance) fingerprint.
  • JA4X = certificate-chain fingerprint.

The point: TLS fingerprinting catches what TCP fingerprinting misses. Two systems with identical TCP behavior (same kernel) but different applications (Chrome vs. Firefox) have identical p0f signatures but different JA4 fingerprints. Conversely, the same Chrome version on different OSes has the same JA4 but different p0f. Combining them is more identifying than either alone.

HTTP/2 and application-behavior fingerprints

HTTP/2 introduces yet another fingerprintable layer. The protocol's framing model means each client speaks HTTP/2 with subtly different choices:

SETTINGS frame. Each client sends a SETTINGS frame at connection start declaring its preferences:

  • SETTINGS_INITIAL_WINDOW_SIZE (initial flow-control window)
  • SETTINGS_MAX_CONCURRENT_STREAMS (limit on parallel streams)
  • SETTINGS_HEADER_TABLE_SIZE (HPACK dynamic table size)
  • SETTINGS_MAX_FRAME_SIZE (frame-size limit)
  • SETTINGS_ENABLE_PUSH (whether server push is allowed)

Different clients pick different values. Chrome's defaults differ from Firefox's; both differ from curl, libcurl, and most server-side HTTP/2 libraries.

Pseudo-header order. HTTP/2 requires specific pseudo-headers (:method, :path, :scheme, :authority) but doesn't mandate the order. Different clients send them in different orders.

Header frame ordering. The order in which a client sends regular headers within a HEADERS frame can differ between implementations.

WINDOW_UPDATE timing. When a client sends WINDOW_UPDATE frames to manage flow control reveals the implementation's flow-control logic.

PRIORITY frame usage. Whether a client sends explicit PRIORITY frames and which dependency tree it builds.

HPACK encoding choices. HPACK supports multiple encodings for the same logical header; clients pick differently (literal vs. indexed, Huffman-encoded values vs. raw, etc.).

Combining these into a fingerprint produces an HTTP/2 client identifier (the JA4H spec captures much of this). The fingerprint distinguishes browsers from libraries from custom clients.
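The shape of such a fingerprint can be sketched as a string built from SETTINGS values plus pseudo-header order. This follows the general idea of JA4H-style fingerprints, not the exact spec; SETTINGS identifiers are per RFC 9113, and the example values are illustrative (roughly in the ballpark of a Chromium-family client):

```python
# Illustrative HTTP/2 client fingerprint: SETTINGS values plus pseudo-header
# order joined into one compact, comparable string. Not the JA4H spec.
def http2_fingerprint(settings: dict, pseudo_header_order: list) -> str:
    settings_part = ";".join(f"{k}:{v}" for k, v in sorted(settings.items()))
    order_part = ",".join(h.lstrip(":") for h in pseudo_header_order)
    return f"{settings_part}|{order_part}"

chrome_like = http2_fingerprint(
    # 1=HEADER_TABLE_SIZE, 2=ENABLE_PUSH, 4=INITIAL_WINDOW_SIZE,
    # 6=MAX_HEADER_LIST_SIZE (RFC 9113 identifiers; values illustrative)
    {1: 65536, 2: 0, 4: 6291456, 6: 262144},
    [":method", ":authority", ":scheme", ":path"],
)
print(chrome_like)  # 1:65536;2:0;4:6291456;6:262144|method,authority,scheme,path
```

Two clients that send different SETTINGS values or a different pseudo-header order produce different strings, even if their User-Agent headers are identical.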

The Black Hat Europe 2017 paper "Passive Fingerprinting of HTTP/2 Clients" (Shuster and Shulman) demonstrated that HTTP/2 settings alone could distinguish browser families with high accuracy. Modern fingerprinting combines HTTP/2 with TLS and TCP for layered specificity.

The implication: even if you spoof your User-Agent (so the User-Agent header lies), the HTTP/2-level behavior gives you away. The HTTP/2 implementation can't easily lie about its SETTINGS values without breaking compatibility; the lower-layer TLS and TCP fingerprints can't lie at all without modifying the underlying libraries.

Layer fusion in the real world

The powerful observation: defenders and trackers don't stop at one layer. They combine TCP, TLS, HTTP, and application-layer fingerprints into a unified identity.

A typical fusion approach for a CDN edge:

  1. Capture TCP SYN. Compute p0f-style fingerprint. → "Linux 6.x" or "Windows 11" etc.
  2. Capture TLS ClientHello. Compute JA4 fingerprint. → "Chrome 130 on Windows" or "Firefox 128 on Linux" etc.
  3. Capture HTTP/2 SETTINGS and first request. Compute JA4H fingerprint. → "Chrome 130 with default settings"
  4. Combine. The trio (TCP+TLS+HTTP/2) is much more identifying than any single fingerprint.
  5. Match against historical observations. A specific (TCP, TLS, HTTP/2) tuple seen multiple times from the same source IP is probably the same client; a tuple seen from many IPs may identify a single client through different VPN exits.
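Step 4 can be sketched as hashing the per-layer fingerprints into one composite correlation key. The labels below are placeholders for whatever a real pipeline produces at each layer:

```python
import hashlib

# Sketch of the fusion step: hash the (TCP, TLS, HTTP/2) fingerprints into a
# single composite key for correlating observations across requests and IPs.
def composite_id(tcp_fp: str, tls_fp: str, h2_fp: str) -> str:
    return hashlib.sha256(f"{tcp_fp}|{tls_fp}|{h2_fp}".encode()).hexdigest()[:16]

# Placeholder layer labels, not real fingerprint values:
a = composite_id("linux-6.x", "chrome130-tls", "chrome130-h2")
b = composite_id("windows-11", "chrome130-tls", "chrome130-h2")
print(a != b)  # True: differing at any one layer changes the composite
```

This is also why the composite is more discriminative than any layer alone: a collision requires matching at every layer simultaneously.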

The fusion is what makes evasion hard. To avoid all three fingerprints simultaneously, a client would have to:

  • Modify the TCP stack to match a chosen target OS (kernel-level changes; possible but rare).
  • Modify the TLS library to match a chosen target browser (uTLS, REALITY, naïveproxy do this).
  • Modify the HTTP/2 implementation to match a target's SETTINGS and ordering (also possible but requires careful work).

Tools like uTLS (Go library that mimics specific browsers' TLS fingerprints) and curl-impersonate (a fork of curl that produces Chrome/Firefox/Edge ClientHellos) address the TLS layer. Combining them with TCP-stack mimicry is rarely done; the TCP layer is usually left to whatever the OS provides. This means a curl-impersonate request from Linux still has a Linux TCP fingerprint with a Chrome TLS fingerprint — a combination that doesn't exist in the wild and is itself fingerprintable.

The escalation: defenders detect the inconsistency and flag clients with mismatched-layer fingerprints. The classic example is bot detection: a script claiming to be Chrome on Windows but having Linux TCP fingerprints is almost certainly a bot.
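The consistency check itself is simple. A minimal sketch, with an illustrative OS-family mapping (not taken from any real bot-detection product):

```python
# Sketch of the mismatch check described above: flag clients whose
# User-Agent-claimed OS disagrees with the OS inferred at the TCP layer.
def layers_consistent(claimed_ua_os: str, tcp_os: str) -> bool:
    # Normalize both sides to coarse families before comparing; the mapping
    # is illustrative (Android runs a Linux kernel, hence the same family).
    families = {"windows": "windows", "win32": "windows",
                "linux": "linux", "android": "linux",
                "mac": "macos", "macos": "macos", "iphone": "ios"}
    claim = families.get(claimed_ua_os.lower(), "unknown")
    actual = families.get(tcp_os.lower(), "unknown")
    return claim == actual

print(layers_consistent("Windows", "windows"))  # True: plausible browser
print(layers_consistent("Windows", "linux"))    # False: likely a bot
```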

The defensive cost of multi-layer fingerprint resistance is substantial. To convincingly impersonate a real Chrome-on-Windows browser, a tool would need to:

  • Run on Windows (for the TCP stack), or modify the TCP stack on a non-Windows OS.
  • Use a TLS library that matches Chrome's TLS exactly (uTLS or equivalent).
  • Use an HTTP/2 implementation that matches Chrome's behavior.
  • Match Chrome's HTTP/1.1 fallback behavior if HTTP/2 isn't negotiated.
  • Match Chrome's connection re-use, prioritization, and other higher-layer behaviors.

This is doable for sophisticated adversaries (red teams, well-funded research projects) but is too much engineering for casual evasion. Most "I want to look like a real browser" tools settle for matching the TLS layer and accept that the TCP and HTTP/2 layers may give them away.

Defensive implications for anonymity engineering

The synthesis: anti-fingerprinting requires standardization at multiple layers, not just hiding IPs or encrypting content.

For anonymity engineering specifically:

Tor Browser standardizes browser-layer fingerprints (covered in the previous module). It does not standardize TCP-stack fingerprints — your Tor Browser's TCP SYN to the guard relay carries your OS's standard fingerprint. This is acceptable because the guard relay is the only observer of that SYN; the rest of the path doesn't see your TCP layer (it sees Tor cells).

Tor's network architecture neutralizes TCP fingerprinting at the destination. The destination sees the exit relay's TCP SYN, not yours. Whatever OS you're using, the destination sees a TCP fingerprint that matches the exit relay's OS. This is good.

TLS fingerprinting matters end-to-end through Tor. When Tor Browser establishes a TLS connection to a destination through the exit, the TLS handshake is end-to-end (the exit doesn't terminate it). The destination sees Tor Browser's TLS ClientHello, which is standardized to be Tor Browser's. Tor Browser users converge on the same JA4 fingerprint, providing crowd protection at the TLS layer.

HTTP/2 fingerprinting also matters end-to-end. Same reasoning as TLS; the HTTP/2 SETTINGS that Tor Browser sends are observable to the destination. Tor Browser standardizes HTTP/2 settings for crowd protection.

For VPN users (not Tor users), the picture is different. A VPN exit terminates and re-originates TCP connections; the destination sees the VPN's TCP fingerprint, which is the OS the VPN exit runs (typically Linux variants). This standardizes TCP at the cost of grouping all VPN users from that exit together. TLS and HTTP/2 are end-to-end through the VPN tunnel; the destination sees the user's actual TLS and HTTP/2 fingerprints. So a VPN doesn't address TLS or HTTP/2 fingerprinting.

For sophisticated evasion (e.g., REALITY-based censorship circumvention), TLS impersonation is essential. The whole point of REALITY is to make TLS connections to a censored destination look indistinguishable from TLS connections to a legitimate destination. uTLS-based impersonation matches the TLS fingerprint of a real browser; the TCP fingerprint may or may not match (depends on what OS the censorship-evasion client runs on).

The general principle: address the layers your threat model requires. Pure transport anonymity (Tor) is largely sufficient when combined with browser standardization. Censorship evasion requires TLS impersonation. Multi-tracker resistance requires browser standardization plus careful traffic-shape attention. The right defense depends on which observers you're hiding from.

Hands-on exercise

Inspect a SYN packet for fingerprinting features.

Tools: tcpdump, tshark. Runtime: 15 minutes.

In one terminal:

sudo tcpdump -nn -i any -w /tmp/syn.pcap "tcp[tcpflags] & tcp-syn != 0 and tcp[tcpflags] & tcp-ack == 0" &
TCPDUMP_PID=$!

In another terminal, initiate a few TCP connections:

curl -o /dev/null -s https://example.com
nc -z -w 1 1.1.1.1 443
ssh -o ConnectTimeout=2 root@127.0.0.1 2>/dev/null

Stop tcpdump:

kill $TCPDUMP_PID
wait $TCPDUMP_PID 2>/dev/null

Decode SYNs with tshark:

tshark -r /tmp/syn.pcap -V \
  -O tcp \
  -Y "tcp.flags.syn == 1 and tcp.flags.ack == 0" 2>/dev/null | head -60

For each SYN, identify:

  • IP TTL (look at Time to live)
  • TCP window size (Window size value)
  • TCP option order (the Options section)
  • MSS option value
  • Window Scale value
  • SACK Permitted (yes/no)
  • Timestamps option present (yes/no)

Compare two SYNs from your machine. They should be similar (same OS, same TCP defaults). Then if you have a different machine handy (e.g., a Windows VM), capture a SYN from there and compare. The differences in TTL, window size, option order, and option choices are the fingerprint surface p0f exploits.

Stretch: install p0f (brew install p0f on macOS, apt install p0f on Debian/Ubuntu) and run it on the same pcap:

p0f -r /tmp/syn.pcap

p0f's output will include identified OS labels for each SYN.

Compare layered fingerprints conceptually.

Tools: notes. Runtime: 10 minutes.

For the same client (e.g., Chrome 130 on Linux 6.x), write what each of the following observers sees:

  • TCP SYN observer (e.g., the user's ISP, a router on the path). Sees: source IP, destination IP, TCP options including order, TTL, window size. Can fingerprint the OS as Linux 6.x via p0f.
  • TLS observer (anyone in the path before TLS terminates). Sees: SNI, TLS version, cipher list, extension list. Can fingerprint as Chrome 130 via JA4.
  • HTTP/2 observer (the destination service or a TLS-terminating intermediary). Sees: SETTINGS frame, pseudo-header order, request structure. Can fingerprint as Chrome 130 via JA4H.

Now consider how the picture changes with a VPN:

  • TCP SYN at user's ISP. Sees: the user's source IP and the VPN's destination IP. Cannot see the inner TCP handshake because it's encrypted inside the tunnel.
  • TCP SYN between VPN exit and destination. Sees: the VPN exit's source IP and the destination's IP. Can fingerprint the VPN exit's OS, not the user's.
  • TLS (end-to-end). Same as before; Chrome JA4 fingerprint visible to destination.
  • HTTP/2 (end-to-end). Same as before; Chrome JA4H fingerprint visible to destination.

What changes with Tor:

  • TCP at user's ISP. Sees user's TCP SYN to guard relay. Can fingerprint user's OS.
  • TCP between exit and destination. Sees exit's TCP SYN. Can fingerprint exit's OS.
  • TLS (end-to-end). Tor Browser's standardized JA4 visible to destination.
  • HTTP/2 (end-to-end). Tor Browser's standardized JA4H visible to destination.

The point: which layers are addressed depends on the architecture. Tor addresses TCP fingerprinting at the destination because the exit re-originates the TCP connection; a VPN does the same. Both leave the end-to-end TLS and HTTP/2 fingerprints intact. Anti-fingerprinting browsers (Tor Browser, Mullvad Browser) standardize those end-to-end fingerprints. The combination is what addresses the layered threat.

Common misconceptions and traps

"TCP fingerprinting is obsolete because TLS encrypts everything." TLS doesn't encrypt the TCP SYN — the SYN is a TCP-layer packet, with TLS not yet started. TCP fingerprinting works on the SYN itself, before any TLS exists. TLS fingerprinting is a different layer that addresses what TCP fingerprinting can't.

"Fingerprinting identifies exact OS versions perfectly." Many fingerprints are probabilistic. NATs, proxies, custom TCP tunings, and stack modifications can shift the signal. p0f gives a confidence score; high-confidence matches are reliable, but many real-world signatures are blends or partial matches.

"JA3 and JA4 are just malware-hunting tools." They started in security-research and intrusion-detection contexts but are also a privacy and identification surface for ordinary clients. Many CDNs and trackers use TLS fingerprinting to classify legitimate users, not just to detect malicious traffic.

"If I change my IP, stack fingerprinting no longer matters." The stack identity is independent of IP. A user behind three different VPN exits can be linked to a single OS through their consistent TCP fingerprint at the user-to-VPN hop, and to a single browser through their consistent TLS+HTTP/2 fingerprint at the VPN-exit-to-destination hop.

"One fingerprint layer is enough." Real observers combine TCP, TLS, and HTTP/2 signals. A defense that addresses one layer leaves the others available. uTLS-based TLS impersonation without TCP-layer alignment produces "Chrome TLS on Linux TCP" — a mismatched profile that's itself fingerprintable.

"My VPN handles all this for me." A VPN handles the network-location layer. It does not standardize your TLS or HTTP/2 fingerprints (those are end-to-end). It does not standardize your TCP fingerprint between you and the VPN provider (the VPN provider sees it). A VPN is one defense layer among many.

"Linux is harder to fingerprint than Windows because it's more diverse." Diversity helps somewhat, but specific kernel versions are still distinguishable. Linux 6.x has its own signature; specific distributions add minor variations from kernel patches. Linux is fingerprintable; the question is granularity, not whether.

"Modern browsers randomize TLS extensions to defeat fingerprinting." Some do (Chrome's TLS extension shuffling), but JA4 was specifically designed to be stable under such shuffling. Randomization addresses JA3 but not JA4. The arms race continues.

"Network observers can't see HTTP/2 because it's inside TLS." Network observers between the client and the TLS-terminating intermediary can't see HTTP/2. The destination service (or any TLS-terminating proxy) does see it. CDNs see HTTP/2 from their clients; large platforms see HTTP/2 from their users. The fingerprint is visible to anyone who terminates the TLS.

Wrapping up

OS and TCP/IP stack fingerprinting works at the lower layers of the stack, identifying client implementations through the small choices each implementation makes about wire-protocol details: TTL, window size, TCP option order, TLS extension structure, HTTP/2 SETTINGS, and dozens more. Each layer adds entropy; the layers compose into identifiers that work across IP changes and survive most casual privacy measures.

The defensive picture for anonymity engineering: addressing fingerprinting requires standardization at every relevant layer. Tor handles the TCP layer at the destination (because the exit re-originates TCP); browser standardization (Tor Browser, Mullvad Browser) handles TLS and HTTP/2 fingerprints. VPNs handle TCP at the destination but leave TLS and HTTP/2 alone. Pure tracker resistance requires the full stack; partial defenses leave proportionally more identifying information visible.

Sophisticated evasion (uTLS, REALITY, naïveproxy) addresses the TLS layer with library-level impersonation. Without coordinated TCP and HTTP/2 alignment, the impersonation can produce mismatched-layer profiles that are themselves identifying. Convincing complete impersonation requires running on the OS being impersonated and using libraries that match every layer.

Track 5 (Detection and Censorship) goes deeper into active probing, deep packet inspection, and the specific techniques nation-state firewalls use to identify and block circumvention traffic — the active counterpart to this module's passive layered fingerprinting.

Further reading