
Kernel-level packet filtering: XDP and eBPF basics

An operator-first introduction to XDP and eBPF packet filtering: where XDP sits in the path, what the actions mean, and when it beats nftables or tc.

XDP is not "iptables but faster."

That is the most important sentence in this whole article. If you miss it, the rest of the eBPF discourse turns into marketing fog.

XDP is an ingress hook at the network-device boundary. It exists so Linux can inspect, drop, pass, transmit, or redirect packets extremely early in the receive path. That makes it very good at some jobs and the wrong tool for many others.

If you already know you need rich stateful firewall policy, stay in nftables land. If you need packet triage before the normal stack even allocates a socket buffer, then XDP becomes interesting.

That distinction is the adult version of XDP hype control. Plenty of people hear "kernel fast path" and immediately try to treat it like a universal replacement for every other network primitive they already understand.

Where XDP sits in the path

The canonical XDP program-type docs explain the core point cleanly: XDP programs attach to network devices and run for every ingress packet received by that device.
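
To make that concrete, here is roughly what the smallest possible XDP program looks like: a minimal sketch in restricted C, assuming libbpf headers. The function name is illustrative; the program just passes every packet through untouched.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

/* Attached to a device, this runs once per ingress packet. */
SEC("xdp")
int xdp_pass_all(struct xdp_md *ctx)
{
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

Compiled with clang's BPF target (clang -O2 -g -target bpf -c), this produces an object file a loader can attach to an interface.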

In native driver mode, XDP runs before socket-buffer allocation. That is the whole performance story. Earlier work means less CPU and memory wasted on packets you were going to drop anyway.

It also changes observability.

When you drop a packet that early, it never becomes the kind of packet a later tool like tcpdump expects to see. The docs explicitly note that XDP_DROP can make packets invisible to tools like tcpdump. That is not a bug. It is the consequence of dropping them before the usual layers ever see them.

So the first operator lesson is:

  • later hooks are richer
  • earlier hooks are cheaper
  • earlier hooks also hide more from familiar tooling

If you like debugging by watching packets later in the stack, XDP will force you to become a little more deliberate.

The five actions that matter

XDP programs return one of a small set of actions. The program reference is worth reading because the semantics are simple and important.

XDP_PASS

Let the packet continue through the normal networking stack.

This is the "not my problem" answer, and it is what you use when your XDP program is only filtering a narrow class of traffic.

XDP_DROP

Drop the packet immediately.

This is the classic XDP use case: early rejection of traffic you do not want to spend CPU on.

For volumetric garbage or tiny stateless deny logic, this is exactly where XDP shines.
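
As a sketch of what that looks like, here is a small program that drops UDP packets to one arbitrary example port (9999) and passes everything else, counting drops in a per-CPU map. All names here are illustrative, and it ignores IP options for brevity. Note how every header access is bounds-checked against data_end; the verifier rejects the program otherwise.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <linux/ip.h>
#include <linux/udp.h>
#include <linux/in.h>
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_endian.h>

struct {
    __uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u64);
} drop_count SEC(".maps");

SEC("xdp")
int drop_udp_9999(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;

    struct ethhdr *eth = data;
    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;
    if (eth->h_proto != bpf_htons(ETH_P_IP))
        return XDP_PASS;

    struct iphdr *ip = (void *)(eth + 1);
    if ((void *)(ip + 1) > data_end)
        return XDP_PASS;
    if (ip->protocol != IPPROTO_UDP)
        return XDP_PASS;

    /* Simplification: assumes no IP options. */
    struct udphdr *udp = (void *)(ip + 1);
    if ((void *)(udp + 1) > data_end)
        return XDP_PASS;

    if (udp->dest == bpf_htons(9999)) {
        __u32 key = 0;
        __u64 *count = bpf_map_lookup_elem(&drop_count, &key);
        if (count)
            (*count)++;
        return XDP_DROP;
    }
    return XDP_PASS;
}

char _license[] SEC("license") = "GPL";

That drop_count map is the kind of evidence the observability discussion below leans on: the packet vanishes, but the counter does not.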

XDP_TX

Transmit the packet back out the same interface.

Useful for narrow cases, not the beginner path.
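
For a flavor of what "narrow" means, here is a toy sketch that reflects every frame back out the same interface by swapping the Ethernet addresses. Anything real would also have to rewrite higher-layer headers and checksums; this only illustrates the action.

#include <linux/bpf.h>
#include <linux/if_ether.h>
#include <bpf/bpf_helpers.h>

SEC("xdp")
int xdp_reflect(struct xdp_md *ctx)
{
    void *data = (void *)(long)ctx->data;
    void *data_end = (void *)(long)ctx->data_end;
    struct ethhdr *eth = data;
    unsigned char tmp[ETH_ALEN];

    if ((void *)(eth + 1) > data_end)
        return XDP_PASS;

    /* Swap source and destination MACs, then bounce the frame. */
    __builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
    __builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
    __builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);
    return XDP_TX;
}

char _license[] SEC("license") = "GPL";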

XDP_REDIRECT

Send the packet somewhere else.

The docs are explicit here: redirect is not magic by itself. It has to be paired with helpers like bpf_redirect_map() plus backing maps such as DEVMAP or XSKMAP.
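
A sketch of that pairing, with illustrative names: a DEVMAP whose single entry is assumed to be populated with a target ifindex from user space, plus bpf_redirect_map() to do the steering.

#include <linux/bpf.h>
#include <bpf/bpf_helpers.h>

struct {
    __uint(type, BPF_MAP_TYPE_DEVMAP);
    __uint(max_entries, 1);
    __type(key, __u32);
    __type(value, __u32);   /* target ifindex, set from user space */
} redirect_map SEC(".maps");

SEC("xdp")
int xdp_redirect_one(struct xdp_md *ctx)
{
    /* Returns XDP_REDIRECT on success; the third argument is the
       fallback action if the map entry is missing. */
    return bpf_redirect_map(&redirect_map, 0, XDP_PASS);
}

char _license[] SEC("license") = "GPL";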

XDP_ABORTED

This is the one to treat with respect.

The docs warn that XDP_ABORTED triggers the xdp_exception tracepoint and is expensive enough that you should not use it casually in production. In practice, if you are seeing a lot of aborted outcomes, you have a program-quality problem, not a packet-policy success story.

That is a good general mindset for eBPF work: if your control path depends on exceptional behavior, you do not yet have a production-grade design.

Native, generic, and offload modes

This is the section most beginner explainers skip, and it is where many bad expectations come from.

Native or driver mode

This is where XDP earns its reputation. You get execution in the driver path before the normal stack does more expensive work.

Generic or SKB mode

This mode works even without driver support, which sounds convenient until you notice the cost: the docs say generic mode negates most of XDP's speed advantage. At that point, you are much closer to later-stack processing and should seriously ask whether tc is the better hook.

Hardware offload

This is the glamorous one and the least generally useful one. It requires both driver and NIC support, supports only a subset of features, and comes with its own caveats. The docs also note incompatibilities, such as hardware-offloaded GRO and LSO needing to be disabled before attaching.

So the honest operator answer is:

  • native mode is the interesting default
  • generic mode is a fallback, not a victory
  • offload mode is niche and hardware-sensitive

If you are in generic mode and feeling proud of having adopted XDP, pause and ask whether tc would have been the clearer hook all along.

When to use XDP, and when not to

Use XDP when the problem is:

  • early drop of obviously bad traffic
  • very small stateless packet decisions
  • fast redirect or steering
  • protecting CPU from junk before the normal stack pays for it

Do not reach for XDP first when the problem is:

  • rich stateful firewall policy
  • complex NAT logic
  • application-aware filtering
  • "I want to learn eBPF so I will rebuild a normal firewall badly"

That is where nftables and, in some cases, tc remain better fits. The mental model is not "newer replaces older." It is "different hook, different job."

This is the same reason I would still point most operators toward /blog/nftables-vs-iptables-vs-ufw for ordinary host policy. Rich firewalling is not a failure to use enough eBPF.

If you want a quick operator decision rule, use this:

  • need rich host firewall policy: nftables
  • need packet shaping or later-stack programmability: tc
  • need brutally early ingress triage: XDP

That is not mathematically perfect. It is operationally useful.

And operationally useful is the standard that matters when you are choosing a hook for a real system instead of a conference slide.

Clarity beats novelty here.

AF_XDP is the user-space branch

Once people hear "XDP," they quickly encounter AF_XDP. The kernel docs describe it as a socket family optimized for high-performance packet processing, using XDP redirect to steer frames into user space.

That is real and useful.

It is also a different operational animal from "attach a small drop program."

AF_XDP introduces:

  • UMEM management
  • RX/TX rings
  • Fill and Completion rings
  • user-space packet engine design

If your original question was "how do I cheaply drop obvious junk at ingress," AF_XDP is not the beginner branch. It is the "I am building a packet-processing system now" branch.

That can be exactly the right project. It is just not the same project as "I need a fast filter."
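
For scale, here is a minimal sketch of just the setup surface, assuming libxdp's xsk API; the interface name, queue id, and sizes are placeholders. A real engine would still have to populate the Fill ring, poll for frames, and manage UMEM addresses, none of which is shown.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <xdp/xsk.h>   /* older setups ship this header as <bpf/xsk.h> */

int main(void)
{
    /* UMEM: one buffer area shared between the kernel and user space. */
    size_t size = 4096 * XSK_UMEM__DEFAULT_FRAME_SIZE;
    void *buf = NULL;
    if (posix_memalign(&buf, getpagesize(), size))
        return 1;

    struct xsk_umem *umem;
    struct xsk_ring_prod fill;
    struct xsk_ring_cons comp;
    if (xsk_umem__create(&umem, buf, size, &fill, &comp, NULL))
        return 1;

    /* One socket bound to one interface queue, with RX/TX rings. */
    struct xsk_socket *xsk;
    struct xsk_ring_cons rx;
    struct xsk_ring_prod tx;
    if (xsk_socket__create(&xsk, "eth0", 0, umem, &rx, &tx, NULL))
        return 1;

    puts("AF_XDP socket is up; the actual packet engine starts here");
    xsk_socket__delete(xsk);
    xsk_umem__delete(umem);
    free(buf);
    return 0;
}

Even this stripped-down setup makes the point: you are now managing rings and memory, not writing a filter.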

A tiny workflow you can test

The safe way to learn XDP is to attach, verify, count, and detach. Not benchmark first. Not rewrite your perimeter.

For example:

# load in native driver mode, selecting the program by ELF section
sudo xdp-loader load -m native -s xdp-drop-pass eth0 /usr/libexec/xdp-tools/xdp_drop.o
# confirm what is attached, and to which device
sudo bpftool net
sudo bpftool prog show
# detach everything when you are done
sudo xdp-loader unload eth0 --all

And remember the observability lesson:

# maps are where your counters live
sudo bpftool map show
# stream bpf_trace_printk output from running programs
sudo bpftool prog tracelog
# offloads interact with XDP: VLAN offload can strip tags before your program sees them
ethtool -k eth0 | grep vlan-offload

If packets are being dropped early, the absence of traffic in tcpdump is not proof that nothing is happening. It may be proof that XDP is doing its job.

This is why counters matter. If you cannot point to a program, a map, or a tracepoint telling you what happened, you are operating on belief.
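
Here is a sketch of what "pointing to a map" can look like from user space, assuming the per-CPU drop counter from the earlier example was pinned at a hypothetical bpffs path (for example via bpftool map pin). For a per-CPU map, libbpf returns one value per possible CPU, so you sum them.

#include <stdio.h>
#include <bpf/bpf.h>
#include <bpf/libbpf.h>

int main(void)
{
    /* Hypothetical pin path for the drop_count map. */
    int fd = bpf_obj_get("/sys/fs/bpf/drop_count");
    if (fd < 0) {
        perror("bpf_obj_get");
        return 1;
    }

    int ncpus = libbpf_num_possible_cpus();
    __u64 values[ncpus];
    __u32 key = 0;

    if (bpf_map_lookup_elem(fd, &key, values)) {
        perror("bpf_map_lookup_elem");
        return 1;
    }

    /* Sum the per-CPU slots into one total. */
    __u64 total = 0;
    for (int i = 0; i < ncpus; i++)
        total += values[i];
    printf("drops: %llu\n", (unsigned long long)total);
    return 0;
}

Reading the counter is the difference between claiming drops and proving them.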

That is also why XDP demos can mislead people. A fast demo that drops packets is easy. A production workflow that proves what was dropped, why, and in which mode is the real engineering.

If your only success metric is packets-per-second in a lab, you are missing the harder part of the job.

Production systems need explanation as much as speed.

The operator opinion

XDP is worth learning because it makes the Linux packet path make more sense. Even if you never ship a production XDP program, understanding where it lives clarifies where nftables, tc, and user-space packet processing fit.

But keep the ambition proportional to the problem.

If you need a readable server firewall today, use nftables. If you need tiny, brutally early packet decisions at the driver edge, XDP is the right tool. If you are already in generic mode and calling it a performance win, you are mostly roleplaying.

That is the mental model I would keep.