TCP at the wire level
TCP byte-by-byte: three-way handshake, state machine, sequence numbers, retransmission, window scaling, FIN vs RST. Read packet captures with confidence.
The version of TCP that runs on a 2026 Linux box is, by line count, mostly the protocol specified in RFC 793 in September 1981 (edited by Jon Postel, refining the design Vint Cerf and Bob Kahn published in 1974). The framing, the three-way handshake, the byte-level segment header — none of it has changed. Almost every refinement since then has been bolted on as an option or a sender-side algorithm choice, leaving the wire format intact. The fact that you can still pull up a 1980s tcpdump trace and understand what's happening today is unusual in our industry, and it's a good thing.
This module is the in-depth wire-level reference. By the end of it you should be able to open a packet capture, read each TCP segment field by field, and explain what state each endpoint is in and why.
Prerequisites
You should already understand IPv4 (Module 1.3), basic IP forwarding (Module 1.5), and the role of UDP as TCP's lazier cousin (Module 1.6). If those don't ring a bell, read those first — TCP makes very little sense without them.
Learning objectives
After this module you should be able to:
- Trace a TCP three-way handshake from a `pcap` file to socket states on both endpoints.
- Identify when a SYN-ACK retransmission indicates a NAT timeout vs an RTO recovery.
- Read the 9 control flags of a TCP segment and predict what the receiving stack will do.
- Explain why `TIME_WAIT` exists and when (very rarely) tuning it is appropriate.
- Implement a minimal TCP-listening program and inspect its state-machine transitions with `ss -t`.
What TCP is for
There are exactly two transport protocols that run on top of IP and matter at scale: UDP and TCP. UDP carries an IP packet's payload as-is, with two ports tacked on. TCP is everything else.
The job description for TCP fits in five lines:
- Turn an unbounded byte stream into segments small enough to fit in IP packets, and back again.
- Get every byte to the destination, in order, exactly once, even when the network drops, reorders, or duplicates packets.
- Don't send so fast that the receiver runs out of buffer space (flow control).
- Don't send so fast that intermediate routers run out of buffer space (congestion control).
- Detect when the connection is dead and tell the application.
Doing all of that simultaneously, on a network the protocol designers explicitly do not trust, is what makes TCP the most-studied transport protocol in computer science. Most of the apparent complexity is bookkeeping required by those five jobs. Once you see the bookkeeping, the protocol stops feeling complex — it feels inevitable.
The segment format
A TCP segment is a 20-byte fixed header followed by 0–40 bytes of options and then the payload. Every byte the kernel sends or receives over a TCP socket is wrapped in this:
 0                   1                   2                   3
 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|          Source Port          |       Destination Port        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                        Sequence Number                        |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                     Acknowledgment Number                     |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|  Data |     |N|C|E|U|A|P|R|S|F|                               |
| Offset| Rsvd|S|W|C|R|C|S|S|Y|I|            Window             |
|       |     | |R|E|G|K|H|T|N|N|                               |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|           Checksum            |        Urgent Pointer         |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                    Options                    |    Padding    |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
|                           (Payload)                           |
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+
Field by field, with the rationale for each:
Source port (16 bits). Identifies the sending application. The 5-tuple (src IP, src port, dst IP, dst port, protocol=TCP) uniquely identifies a connection on both endpoints. Kernels allocate ephemeral source ports for outbound connections out of a pool defined in /proc/sys/net/ipv4/ip_local_port_range (typically 32768–60999 on Linux).
Destination port (16 bits). The well-known service port (HTTP=80, HTTPS=443, SSH=22, …) on the receiving side. The first 1024 ports are privileged; binding to them on Linux requires CAP_NET_BIND_SERVICE or root.
Sequence number (32 bits). The position of the first byte of the payload in the sender's byte stream. Critical detail: the sequence number is per-direction, not per-connection. Each side maintains its own monotonically-increasing sequence space. SYN and FIN each consume one sequence number even though they carry no payload; everything else advances the SEQ by len(payload).
Acknowledgment number (32 bits). Only meaningful when the ACK flag is set (which it is on every segment after the initial SYN). It indicates the next sequence number the sender of this segment expects to receive. So an ACK of N means "I have received and buffered everything up to byte N-1." This is cumulative ACK: there's no field for "I got bytes 100–200 but not 50–99." Selective ACK (SACK) was added later as an option to fix this.
Data offset (4 bits). The header length in 32-bit words, including options. Minimum value is 5 (20-byte header), maximum is 15 (60 bytes — 40 bytes of options).
Reserved (3 bits). Always zero. RFC 9293 keeps them reserved for future use; in the wire format you'll see in 99% of captures, they're zero.
Flags (9 bits, 1 bit each).
- CWR (Congestion Window Reduced): sender saw an ECN-Echo, has reduced its cwnd.
- ECE (ECN-Echo): receiver saw an ECN-marked packet.
- URG (Urgent): the urgent pointer field is meaningful. Almost never used in practice; treat URG as evidence of an unusual application.
- ACK (Acknowledgment): the acknowledgment number field is meaningful.
- PSH (Push): tells the receiver "deliver buffered data to the application now, don't wait for more." A polite hint. Most implementations set PSH on the last segment of a write().
- RST (Reset): "kill this connection now, no orderly close." We'll come back to this.
- SYN (Synchronize): "I want to start a connection; this segment carries my initial sequence number."
- FIN (Finish): "I have no more data to send."
- NS (Nonce Sum): part of an ECN nonce mechanism almost no one implemented; effectively dead.
Window (16 bits). The number of bytes the sender of this segment is willing to receive beyond the acknowledgment number. This is flow control: it tells the other side "don't blast more than this." Native maximum is 64 KB, which became inadequate by the mid-1990s — see window scaling below.
Checksum (16 bits). A 16-bit one's-complement sum over the segment plus a pseudo-header (source IP, dest IP, protocol, length). The pseudo-header is the protocol's only crosscheck against IP-layer corruption and against forwarding errors that misroute a packet to the wrong host. Modern hardware offloads the computation; the kernel rarely touches it.
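To make the pseudo-header concrete, here is a minimal Python sketch of the computation. The function names are illustrative, and real stacks do this in C or in NIC hardware; the arithmetic itself follows RFC 1071's one's-complement folding.

```python
import struct

def ones_complement_sum(data: bytes) -> int:
    """One's-complement sum of 16-bit words, per RFC 1071."""
    if len(data) % 2:
        data += b"\x00"                           # pad odd-length input
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)  # fold the carry back in
    return total

def tcp_checksum(src_ip: bytes, dst_ip: bytes, segment: bytes) -> int:
    """Checksum over the IPv4 pseudo-header plus the TCP segment.

    src_ip/dst_ip are 4-byte packed addresses; segment is the TCP
    header (with its checksum field zeroed) plus payload.
    """
    # Pseudo-header: src IP, dst IP, zero byte, protocol (6 = TCP), length.
    pseudo = src_ip + dst_ip + struct.pack("!BBH", 0, 6, len(segment))
    return (~ones_complement_sum(pseudo + segment)) & 0xFFFF
```

A receiver verifies by summing the segment with the checksum field filled in; a valid segment folds to 0xFFFF.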
Urgent pointer (16 bits). Offset within the payload of the last "urgent" byte. See URG flag — almost never used.
Options (0–40 bytes). TLVs. The ones that actually appear on the wire today:
- MSS (Maximum Segment Size, kind 2): set on SYN segments, advertises the maximum payload bytes this end can receive in a single segment. For Ethernet-attached hosts this is usually 1460 (1500 MTU − 20 IP − 20 TCP).
- Window Scale (kind 3): a power-of-two multiplier (0–14) for the 16-bit Window field. With max scale 14, the effective window is 64 KB × 2^14 = 1 GB.
- SACK Permitted (kind 4) and SACK (kind 5): negotiates and uses Selective Acknowledgment.
- Timestamp (kind 8): sender includes a timestamp; receiver echoes it. Used for RTT measurement and for PAWS.
- TFO (TCP Fast Open, kind 34): a cookie that lets a client send data on the SYN. Rarely deployed end-to-end because middleboxes drop it.
Payload. Whatever the application wrote. Maximum size is IP MTU − IP header − TCP header. On standard Ethernet that's 1460 bytes per segment.
Read the layout above one more time. Most TCP debugging boils down to "which segment had which fields set, in which order" — internalize the byte layout and the rest is bookkeeping.
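If you want to practice the byte layout, a short Python sketch can decode the fixed 20-byte header from raw bytes. The names `parse_tcp_header` and `FLAG_NAMES` are illustrative; options are left as raw bytes.

```python
import struct

# Flag bit positions within the 16-bit "offset + reserved + flags" word,
# least-significant bit first (FIN = bit 0 ... NS = bit 8).
FLAG_NAMES = ["FIN", "SYN", "RST", "PSH", "ACK", "URG", "ECE", "CWR", "NS"]

def parse_tcp_header(data: bytes) -> dict:
    """Decode the fixed TCP header fields (options left unparsed)."""
    (src, dst, seq, ack, off_flags,
     window, checksum, urgent) = struct.unpack("!HHIIHHHH", data[:20])
    data_offset = (off_flags >> 12) & 0xF   # header length in 32-bit words
    flags = [name for i, name in enumerate(FLAG_NAMES) if off_flags & (1 << i)]
    return {
        "src_port": src, "dst_port": dst,
        "seq": seq, "ack": ack,
        "header_len": data_offset * 4,      # bytes, 20..60
        "flags": flags,
        "window": window, "checksum": checksum, "urgent": urgent,
        "options": data[20:data_offset * 4],
    }
```

Feeding it the TCP payload of a packet pulled from a pcap should reproduce what tcpdump prints for that segment.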
The three-way handshake
A connection is established by three segments. Conventionally A initiates and B accepts:
A → B SYN, seq=X (A's initial sequence number = X)
B → A SYN, ACK, seq=Y, ack=X+1 (B picks Y; ACK confirms A's SYN)
A → B ACK, seq=X+1, ack=Y+1
The +1 on each ACK reflects the rule that SYN consumes one sequence number even though it carries no payload. The first real byte of A's data is at sequence X+1, which is what B's ACK number expects.
Before the handshake completes both kernels are doing real work. On B, the listening socket creates a connection request entry on receipt of the SYN, allocates a buffer, picks Y, and sends the SYN-ACK. The connection is in SYN_RECV state, and the kernel will retransmit the SYN-ACK if it doesn't see A's final ACK within the RTO. (If you've ever wondered why a busy server has a non-zero number of SYN_RECV connections in ss -tan, that's why.)
Two practical questions:
Why three segments and not two? If A sent SYN and B replied with just ACK (saying "got your SYN"), B would never tell A what its own initial sequence number is. If B sent ACK + initial SEQ together (in one segment), A still couldn't know whether the segment was actually B's first response or a stale duplicate from a previous incarnation of the connection. The three-way handshake forces both sides to prove they can hear each other before either commits to delivering data. (This is also where SYN cookies come in — B can encode enough state in Y to verify A's existence before allocating any buffer at all.)
Where does X come from? From a kernel-internal random number generator constrained by RFC 6528. Modern Linux derives it from a hash of the 5-tuple, a per-boot secret, and the current time. The value is not sequential and not predictable, because predictable initial sequence numbers were once a real attack surface (RSTs spoofed by guessing where in the sequence space a connection currently lived).
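A toy illustration of the RFC 6528 scheme, not Linux's actual code: the secret, the hash choice, and the 250 kHz clock rate below are the RFC's suggestion, and all names are made up for the example.

```python
import hashlib
import struct
import time

# Illustrative stand-in for a per-boot random secret.
BOOT_SECRET = b"per-boot random secret (illustrative)"

def initial_seq(src_ip, src_port, dst_ip, dst_port, now=None):
    """RFC 6528-style ISN: keyed hash of the 4-tuple plus a clock term.

    The hash spreads connections unpredictably across the sequence
    space; the ~250 kHz clock term keeps successive incarnations of
    the same 4-tuple moving forward.
    """
    key = f"{src_ip}:{src_port}:{dst_ip}:{dst_port}".encode()
    digest = hashlib.sha256(BOOT_SECRET + key).digest()
    (hashed,) = struct.unpack("!I", digest[:4])
    clock = int((now if now is not None else time.time()) * 250_000)
    return (hashed + clock) % 2**32
```

Two connections to the same destination get unrelated ISNs; the same 4-tuple reconnecting one second later gets an ISN 250,000 higher (mod 2³²).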
A SYN-ACK retransmission is the most common reason you'll see the same SYN-ACK twice in a capture. Two causes look very similar but have different fixes:
- A's final ACK is dropped on the path A→B. B retransmits SYN-ACK on RTO timer. Fix: nothing the application can do — the network is at fault.
- A is behind a NAT whose 5-tuple state expired between the SYN and the ACK. B retransmits the SYN-ACK; A's stack has no record of the connection and replies RST. Common on long-RTT cellular paths with ~30-second NAT timeouts. Fix: keepalives to hold the NAT entry open, or retry logic at the application layer.
You can usually distinguish them by noting whether A actually did receive the original SYN-ACK (look for the ACK A sent in response). If A sent the ACK, the network ate it. If A sent nothing or sent RST, the NAT lost the entry.
The state machine
A TCP endpoint is always in exactly one state. There are ten non-trivial states plus the implicit CLOSED:
| State | Meaning |
|---|---|
| CLOSED | No socket. (Not really a state, more the absence of one.) |
| LISTEN | Server has bound and is waiting for SYN. |
| SYN_SENT | Client has sent SYN and is waiting for SYN-ACK. |
| SYN_RECV | Server has received SYN and sent SYN-ACK; waiting for final ACK. |
| ESTABLISHED | Both sides have ACKed each other's SYN; data flows. |
| FIN_WAIT_1 | Local side has sent FIN, waiting for ACK or peer's FIN. |
| FIN_WAIT_2 | Local side's FIN has been ACKed; waiting for peer's FIN. |
| CLOSING | Both sides sent FIN simultaneously; waiting for ACK of our FIN. |
| TIME_WAIT | Peer's FIN has been ACKed; waiting 2×MSL for stragglers. |
| CLOSE_WAIT | Peer sent FIN; we've ACKed; waiting for application to close. |
| LAST_ACK | Application closed in CLOSE_WAIT; waiting for ACK of our FIN. |
Rules of thumb for reading `ss -t` output:

- `LISTEN` and `ESTABLISHED` are the two states you should mostly see in a healthy server.
- A heap of `SYN_RECV` means handshake completion failures — investigate the network or a SYN flood.
- A heap of `CLOSE_WAIT` means your application is leaking — it never called `close()` on a socket whose peer already closed.
- `TIME_WAIT` is fine. It's actively helping by absorbing stragglers; we'll explain.
- `LAST_ACK` and `CLOSING` are transient; you should rarely see more than a handful.
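Those rules of thumb are easy to automate. Here is a Python sketch that tallies states from saved `ss -tan` output; note that `ss` prints its own spellings (`ESTAB`, `SYN-RECV`, `CLOSE-WAIT`), and the thresholds below are illustrative, not canonical.

```python
from collections import Counter

def count_states(ss_output: str) -> Counter:
    """Tally TCP states from `ss -tan` text; the first column is the state."""
    counts = Counter()
    for line in ss_output.splitlines()[1:]:   # skip the header row
        fields = line.split()
        if fields:
            counts[fields[0]] += 1
    return counts

def health_warnings(counts: Counter) -> list:
    """Apply the rules of thumb above (thresholds are illustrative)."""
    warnings = []
    if counts["SYN-RECV"] > 100:
        warnings.append("many SYN-RECV: handshake failures or SYN flood?")
    if counts["CLOSE-WAIT"] > 100:
        warnings.append("many CLOSE-WAIT: application leaking sockets?")
    return warnings
```

Something like `health_warnings(count_states(subprocess.run(["ss", "-tan"], capture_output=True, text=True).stdout))` would wire it to a live box.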
Sequence numbers and what PAWS defends against
The sequence space is 32 bits. At 1 Mbps a 32-bit space wraps every 9.5 hours; at 1 Gbps every 34 seconds; at 100 Gbps every 0.34 seconds. On any modern fast interface, sequence numbers wrap during the lifetime of a single long-running connection.
Wraparound is fine during the connection, because the receiver just keeps doing modulo-2³² arithmetic. But it creates a hazard with retransmissions: if a delayed segment from earlier in the connection is still in flight when the sequence space wraps back around to its old values, the receiver could mistake the stale segment for a current one.
PAWS (Protection Against Wrapped Sequences) fixes this. Both ends include the TCP Timestamp option on every segment. The receiver compares the segment's timestamp against the most recent timestamp it has seen on this connection; if the segment's timestamp is older, it's a stale duplicate and gets dropped. Because timestamps move monotonically forward and at a much smaller rate than sequence numbers, no real wraparound collision can pass the test.
PAWS is also the reason you can't just blindly disable timestamps on a high-throughput connection: doing so re-opens the wraparound hazard. (Disabling timestamps is occasionally proposed as a load-balancer workaround for connection-tracking issues; it's rarely the right answer.)
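The comparison at the heart of PAWS is 32-bit serial-number arithmetic, the same trick used for sequence numbers. A minimal sketch (function names are illustrative):

```python
def ts_newer_or_equal(ts: int, last_ts: int) -> bool:
    """32-bit serial comparison: is ts at or after last_ts?

    True when (ts - last_ts) mod 2**32 is under 2**31, i.e. ts is
    within the half of the space "ahead" of last_ts.
    """
    return (ts - last_ts) % 2**32 < 2**31

def paws_accept(segment_ts: int, ts_recent: int) -> bool:
    """PAWS: drop segments whose timestamp predates the newest seen."""
    return ts_newer_or_equal(segment_ts, ts_recent)
```

Note that the comparison survives the timestamp counter itself wrapping: a segment stamped just past zero still tests as newer than one stamped just below 2³².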
Window scaling, flow control, congestion control
The Window field is 16 bits. 64 KB was a generous receive window in 1981; it's a comically small fraction of the bandwidth-delay product on a 2026 link. A 100 ms RTT × 1 Gbps connection has a BDP of 12.5 MB — about 200× the unscaled max window. Without window scaling, the sender stalls every 64 KB waiting for ACKs and the connection runs at a tiny fraction of line rate.
The fix from RFC 7323 is the Window Scale option. On the SYN, each side advertises a scaling factor s (0–14). Both endpoints then interpret the 16-bit Window field as Window << s. With s=14, the maximum effective window is 1 GB. The factor is fixed at the handshake for the connection's lifetime; Linux derives it from the maximum receive buffer size (net.ipv4.tcp_rmem), typically advertising s=7 with default settings, and then autotunes the actual advertised window within that buffer.
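You can compute the scale factor a given path needs from its bandwidth-delay product. A quick sketch (pure arithmetic, no kernel involvement; the function name is illustrative):

```python
def required_window_scale(bandwidth_bps: float, rtt_s: float) -> int:
    """Smallest Window Scale factor whose maximum effective window
    (65535 << s bytes) covers the path's bandwidth-delay product."""
    bdp_bytes = bandwidth_bps / 8 * rtt_s
    for s in range(15):               # RFC 7323 caps the factor at 14
        if (65535 << s) >= bdp_bytes:
            return s
    raise ValueError("BDP exceeds the ~1 GB maximum scaled TCP window")
```

For the module's example (1 Gbps at 100 ms RTT, a 12.5 MB BDP), the unscaled 64 KB window is hopeless and a scale factor of 8 is the minimum that covers the pipe.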
A frequent confusion: Window is flow control, not congestion control. Flow control is "I'm the receiver; I have this much buffer left." Congestion control is "I'm the sender; I think the network has this much capacity." The two are tracked separately:
- The receive window (rwnd) is what the receiver advertises in the Window field.
- The congestion window (cwnd) is a sender-side variable. It's not on the wire.
The number of bytes the sender is allowed to have unacknowledged at any instant is min(rwnd, cwnd). Hitting either ceiling stalls the sender. Most well-tuned modern stacks are cwnd-limited at long RTT and rwnd-limited only on lossy or buffer-starved paths.
Module 1.8 covers cwnd algorithms (slow start, AIMD, Cubic, BBR) in detail. For now, the lesson is: when troubleshooting throughput, ask which window is the bottleneck. They have entirely different fixes.
Retransmission and RTO
When a segment is sent, the sender starts an RTO (retransmission timeout) timer. If the timer fires before the segment is acknowledged, the segment is sent again and the timer doubles (exponential backoff).
The RTO value is derived from the Round-Trip Time (RTT) using a smoothed-mean-and-deviation algorithm in RFC 6298. The crucial subtlety is Karn's algorithm: if a segment was retransmitted, you cannot use its ACK to update the RTT estimate. You don't know whether the ACK came from the original or the retransmission, and getting that wrong systematically biases RTT toward zero. The Timestamps option neatly sidesteps this by carrying the original send time on every retransmission.
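The RFC 6298 update rules fit in a few lines. A sketch, with the caveat that real kernels add clock-granularity terms and use a lower minimum RTO (Linux floors at 200 ms rather than the RFC's 1 second); per Karn's algorithm, only samples from non-retransmitted segments may be fed in.

```python
class RtoEstimator:
    """RFC 6298 smoothed RTT estimator (clock granularity ignored).

    SRTT and RTTVAR with alpha = 1/8 and beta = 1/4, and
    RTO = SRTT + 4 * RTTVAR, floored at the minimum RTO.
    """
    MIN_RTO = 1.0   # the RFC's floor, in seconds; Linux uses 200 ms

    def __init__(self):
        self.srtt = None
        self.rttvar = None

    def sample(self, rtt: float) -> float:
        """Fold in one RTT measurement and return the new RTO."""
        if self.srtt is None:                 # first measurement
            self.srtt = rtt
            self.rttvar = rtt / 2
        else:
            # RTTVAR is updated with the *old* SRTT, then SRTT moves.
            self.rttvar = 0.75 * self.rttvar + 0.25 * abs(self.srtt - rtt)
            self.srtt = 0.875 * self.srtt + 0.125 * rtt
        return self.rto()

    def rto(self) -> float:
        return max(self.MIN_RTO, self.srtt + 4 * self.rttvar)
```

Notice how a single large outlier sample inflates RTTVAR, and with it the RTO, far more than it moves SRTT: the estimator is deliberately pessimistic about jitter.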
Waiting for RTO is slow. To recover faster from non-pathological loss, Fast Retransmit treats three duplicate ACKs as a strong signal that a single segment was lost and retransmits immediately, without waiting for RTO. Modern stacks combine this with SACK to retransmit only the missing range, not everything from the loss point onward.
A capture trick: if you see TCP retransmissions and the time between the original and retransmission is "around 200 ms," it's probably an RTO. If it's "between 1 and 5 RTTs," it's almost certainly fast retransmit.
Connection teardown — FIN, RST, and the four-way wave
Closing a connection cleanly is symmetric and takes four segments:
A → B FIN, seq=X (A: "I'm done writing.")
B → A ACK, ack=X+1 (B: "Got it.")
B → A FIN, seq=Y (B: "I'm done writing too.")
A → B ACK, ack=Y+1 (A: "Goodbye.")
Either side can be the first to FIN. Simultaneous-close is legal and produces the CLOSING state. A FIN does not close the connection — it half-closes the direction. Each side independently signals "no more data coming" by sending FIN. Until both sides have FIN'd, data can keep flowing in the other direction. (Some protocols rely on this: a client can half-close after sending its request and still read the server's full response.)
A RST is the abrupt alternative. RST means "this connection is dead, abandon all buffered data, deliver no more application bytes." The kernel sends RST when:
- A SYN arrives for a port no one is listening on (responsive RST is the protocol's "ECONNREFUSED").
- An ACK arrives that doesn't match any open connection (frequently because of NAT timeouts).
- The application calls `close()` on a socket with unread data in its buffer (Linux defaults to this — `SO_LINGER` controls it).
- The kernel detects a sequence-number violation indicating a stale or spoofed packet.
Practical wire diagnostic: a RST after the three-way handshake usually means a middlebox is unhappy, not the endpoint. Common culprits are aggressive load balancers killing idle connections, stateful firewalls evicting their connection-tracking entries, and corporate proxies disagreeing with the application about TLS.
TIME_WAIT and why it's not your enemy
When the side that initiated the close finishes the four-way wave, it enters TIME_WAIT. The spec says to stay there for 2 × MSL (Maximum Segment Lifetime), which with the traditional 2-minute MSL means 4 minutes. Linux instead hardcodes the interval to 60 seconds (TCP_TIMEWAIT_LEN in the kernel source). For that minute, the kernel cannot reuse the same (src IP, src port, dst IP, dst port) endpoints for a new connection.
Why hold onto a closed socket for that long?
- Stragglers. A delayed segment from the previous connection might still be in the network. If the 5-tuple is reused immediately, that straggler could be misinterpreted as data on the new connection.
- Final ACK loss. If the very last ACK (`A → B ACK, ack=Y+1`) is dropped, B will retransmit its FIN. A needs to be in a state where it can answer that retransmission. `TIME_WAIT` is that state.
TIME_WAIT becomes a problem only when you're opening short-lived connections at very high rates from one source IP — the source port pool can exhaust, and new connections fail with EADDRNOTAVAIL. The right fixes are, in order:
- Use connection pooling at the application layer (HTTP keep-alive, persistent SQL pools).
- Bind clients to multiple source IPs.
- Consider `SO_REUSEPORT` on the server side.
- As an absolute last resort, `net.ipv4.tcp_tw_reuse=1` (only for outgoing connections; never enable `tcp_tw_recycle`, which has been broken-by-design for years and was removed in Linux 4.12).
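The arithmetic behind the exhaustion claim is worth doing once. With Linux's default ephemeral range (32768–60999) and the kernel's 60-second TIME_WAIT, one source IP talking to one destination (IP, port) can sustain roughly 470 new connections per second before the port pool runs dry. A sketch (function name illustrative):

```python
def max_new_connections_per_sec(port_range=(32768, 60999),
                                time_wait_s=60.0) -> float:
    """Sustainable rate of short-lived connections from one source IP
    to one destination (IP, port) before TIME_WAIT sockets exhaust
    the ephemeral port pool."""
    ports = port_range[1] - port_range[0] + 1   # 28232 on default Linux
    return ports / time_wait_s
```

Any sustained rate above that ceiling, and new `connect()` calls start failing with EADDRNOTAVAIL, which is exactly why pooling (which removes the churn) beats tuning (which only moves the ceiling).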
Reflexively shrinking TIME_WAIT is one of the most common TCP "fixes" engineers reach for. Don't, until you've ruled out the others.
Hands-on: capture and read a handshake
The easiest path is tcpdump plus a tiny client. On a Linux box:
# Terminal 1: start capturing on the loopback interface, filtering to a single port.
sudo tcpdump -i lo -nn -s 0 -w /tmp/tcp-1.pcap 'tcp port 9999'
# Terminal 2: start a listener with netcat.
nc -l -p 9999
# Terminal 3: connect, send "hello", and exit.
echo "hello" | nc 127.0.0.1 9999
# Back in Terminal 1: stop the capture with Ctrl-C.
Read the capture:
tcpdump -nn -r /tmp/tcp-1.pcap
You'll see something close to:
14:02:11.000001 IP 127.0.0.1.42891 > 127.0.0.1.9999: Flags [S], seq 1234567890, win 65495, length 0
14:02:11.000023 IP 127.0.0.1.9999 > 127.0.0.1.42891: Flags [S.], seq 987654321, ack 1234567891, win 65483, length 0
14:02:11.000031 IP 127.0.0.1.42891 > 127.0.0.1.9999: Flags [.], ack 1, win 511, length 0
14:02:11.000099 IP 127.0.0.1.42891 > 127.0.0.1.9999: Flags [P.], seq 1:7, ack 1, win 511, length 6
14:02:11.000115 IP 127.0.0.1.9999 > 127.0.0.1.42891: Flags [.], ack 7, win 511, length 0
14:02:11.001102 IP 127.0.0.1.42891 > 127.0.0.1.9999: Flags [F.], seq 7, ack 1, win 511, length 0
...
Things to notice:
- The first three lines are the three-way handshake — `S`, `S.`, `.` (the `.` means ACK).
- After the first ACK, sequence numbers print as relative offsets by default (`seq 1:7` means 6 bytes starting at relative offset 1).
- `[P.]` is PSH+ACK — the kernel set PSH on the last segment of the small write because there was no more buffered data.
- The FIN appears as `F.` (FIN+ACK).
For a deeper look, open the same pcap in Wireshark and let it color-code segments by stream. Then walk through the state machine for both sides — for each segment, ask "what state was the sender in before this segment, and what state is the sender in after?" If you can answer that for every segment in a real capture, you know TCP at the level this module aimed for.
Stretch exercise
Implement a minimal TCP listener in your language of choice using only the socket API — no framework, no HTTP library. Have it accept a connection, read the bytes the client sends until EOF, write them back uppercased, and close(). While it's running, watch its state machine evolve in another terminal:
watch -n 0.1 'ss -tan | grep :9999'
You should see LISTEN ⟶ ESTABLISHED ⟶ CLOSE_WAIT (when the client closes first) ⟶ LAST_ACK ⟶ disappearance. Or LISTEN ⟶ ESTABLISHED ⟶ FIN_WAIT_1 ⟶ FIN_WAIT_2 ⟶ TIME_WAIT if your code closes first. Both should make sense given the state diagram above.
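If you want a starting point, here is one possible shape in Python; the port and function name are arbitrary, and any language's socket API maps onto the same calls.

```python
import socket

def serve_once(host="127.0.0.1", port=9999):
    """Accept one connection, read until the client half-closes (EOF),
    echo the bytes back uppercased, then close."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)                        # socket is now in LISTEN
        conn, peer = srv.accept()            # ESTABLISHED after the handshake
        with conn:
            chunks = []
            while (data := conn.recv(4096)): # b"" means the peer sent FIN
                chunks.append(data)
            # Peer's FIN received: this socket sits in CLOSE_WAIT until
            # we reply and close(), which sends our own FIN.
            conn.sendall(b"".join(chunks).upper())

# serve_once()   # run it, then: echo hello | nc 127.0.0.1 9999
```

Run it under the `watch` command above and you should see the CLOSE_WAIT → LAST_ACK path, since the client closes first.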
Common misconceptions
"TCP guarantees data arrives at the application." TCP guarantees in-order delivery into the receiver's kernel buffer. The application can crash, get killed, or simply never read() before something else happens. Reliability is a property of the kernel's view, not the application's; design your protocols knowing the kernel can have data the application will never see.
"SO_KEEPALIVE keeps the connection from being closed." It doesn't keep anything alive; it probes for liveness. By default on Linux, the first probe goes out only after the connection has been idle for 7,200 seconds (2 hours). To detect dead peers usefully, you almost always want application-layer pings, not kernel keepalives.
"Fast retransmit means retransmit fast." Fast retransmit means retransmit before the RTO timer expires, when three duplicate ACKs make the loss obvious. It's "fast" relative to RTO, not relative to anything else.
"Disabling Nagle's algorithm makes the application faster." Nagle's algorithm batches small writes into fewer segments to reduce the per-segment overhead. Disabling it (TCP_NODELAY) gets each tiny write out the door immediately, which is what interactive protocols want — but it also tanks throughput on bulk-write workloads that depend on coalescing. Default-off-on-everything is wrong; default-on-with-a-known-exception-for-interactive is right.
"TIME_WAIT is a bug to be tuned away." TIME_WAIT is a correctness mechanism, not a leak. It only becomes a problem in a narrow case (high-rate short connections from one source IP), and the right fix is connection pooling, not lowering tcp_fin_timeout.
Further reading
- RFC 9293 — Transmission Control Protocol (current consolidated spec, 2022) — the working TCP standard. Replaces RFC 793 and absorbs decades of clarifications. Read the segment-format section first; the rest reads like a kernel-internal design doc.
- RFC 7323 — TCP Extensions for High Performance — Window Scale, SACK, Timestamps, PAWS. The reason your gigabit transfer doesn't crawl.
- RFC 5681 — TCP Congestion Control — slow start, AIMD, Reno. Module 1.8 covers cwnd algorithms in depth; this is the canonical primer.
- RFC 6528 — Defending against Sequence Number Attacks — how modern stacks pick ISNs.
- W. Richard Stevens, TCP/IP Illustrated, Volume 1, 2nd ed. — the textbook. Chapter 13 (TCP Connection Establishment and Termination) and chapter 14 (TCP Timeout and Retransmission) are still the clearest exposition in print.
- Linux `tcp(7)` man page — the working engineer's reference for every sysctl, socket option, and behavioral quirk Linux has accumulated.
- Janey Hoe, Improving the Start-up Behavior of a Congestion Control Scheme for TCP, SIGCOMM 1996 — historical, but still the cleanest explanation of why slow start exists in the form it does.
If you came in wanting to read packet captures fluently, you should now be able to. Module 1.8 picks up where this one leaves off: the algorithms that decide how fast to send, given everything you now know about reliable byte-ordered delivery.