Corporate Networks · 6 min read

Split DNS for internal services without breaking laptops

How to design split DNS for internal apps, office networks, and remote teams without turning every laptop into a DNS troubleshooting lab.

Remote access usually fails at DNS long before it fails at encryption.

The tunnel comes up. The route is present. The policy looks right. Then someone types grafana.company.internal and gets nothing, or worse, gets the wrong thing.

That is not a VPN problem. It is a naming problem.

If you run internal services for a distributed team, split DNS is the difference between "private access feels normal" and "everyone keeps a pastebin of IP addresses."

Two different DNS problems people keep mixing together

There are two separate jobs here:

  1. Naming overlay nodes inside your mesh
  2. Resolving internal corporate domains like corp.local, company.internal, or AD-backed zones

Those are related. They are not identical.

Tailscale MagicDNS solves the first problem well: every node gets a name, and users can reach monitoring instead of memorizing a CGNAT address. That is great for agent-managed devices.

It does not automatically solve every split-horizon DNS problem you have for arbitrary internal applications, Active Directory, or private zones behind a routing peer.

Treating those as the same system is how teams create resolver spaghetti.

What good split DNS looks like

A healthy remote-team setup usually looks like this:

  • public DNS remains the default for public names
  • internal zones are matched and forwarded only where needed
  • overlay node naming stays simple and separate
  • DNS traffic reaches the actual internal resolver through an explicit route

In practice, the target state is something like:

Public names        -> 1.1.1.1 / 9.9.9.9 / your normal public resolver
Mesh node names     -> mesh-provided DNS (MagicDNS, NetBird peer names, etc.)
company.internal    -> internal resolver reachable through a router or office peer
corp.local          -> AD-integrated DNS only for clients that need it

That is split DNS. Not "make the office resolver answer everything on earth."
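As a concrete sketch, that mapping can be expressed in a few lines of dnsmasq configuration. The resolver IPs below are placeholders, not values from any real deployment:

```
# Forward only the internal zones to their internal resolvers
server=/company.internal/10.10.0.10   # internal resolver behind a routed subnet (placeholder IP)
server=/corp.local/10.20.0.5          # AD-integrated DNS, only where needed (placeholder IP)

# Everything else goes to the normal public resolver
server=1.1.1.1
```

The same idea maps onto systemd-resolved routing domains or Unbound forward zones; the point is that each zone gets exactly one forwarding rule, and nothing else changes.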

When you do not need a full internal nameserver push

One of the better ideas in NetBird's internal DNS docs is the reminder that you often do not need to push a nameserver at all if you only need access to a few internal resources by domain name.

If the problem is "developers need to reach crm.corp.local and git.corp.local," routing those specific domains through a routing peer can be enough. The routing peer resolves the name on behalf of the client, as long as that routing peer itself can resolve the domain.

That is a much smaller blast radius than forcing every DNS query on every laptop through the corporate resolver.
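Under the hood, this per-domain decision is just longest-suffix matching: find the most specific configured zone that the queried name falls under, otherwise use the default resolver. A minimal Python sketch, with a hypothetical zone-to-resolver mapping:

```python
def pick_resolver(name: str, zones: dict[str, str], default: str) -> str:
    """Return the resolver for the longest matching zone suffix, else the default."""
    labels = name.rstrip(".").lower().split(".")
    # Try progressively shorter suffixes:
    # crm.corp.local -> corp.local -> local
    for i in range(len(labels)):
        suffix = ".".join(labels[i:])
        if suffix in zones:
            return zones[suffix]
    return default

# Hypothetical resolver IPs for illustration only
ZONES = {
    "company.internal": "10.10.0.10",
    "corp.local": "10.20.0.5",
}

print(pick_resolver("crm.corp.local", ZONES, "1.1.1.1"))            # 10.20.0.5
print(pick_resolver("grafana.company.internal", ZONES, "1.1.1.1"))  # 10.10.0.10
print(pick_resolver("github.com", ZONES, "1.1.1.1"))                # 1.1.1.1
```

Five hostnames means five entries and one default. Nothing else on the laptop changes.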

Use a full internal nameserver distribution only when you genuinely need broad internal name resolution:

  • Active Directory environments
  • many internal services in one zone
  • applications that dynamically discover peers through internal DNS

If you only need five hostnames, do not design for five hundred.

The most common failure modes

1. You routed the zone, but not the DNS server

If your internal resolver lives on a private office subnet, clients must be able to reach that resolver's IP as well as the application names it serves.

NetBird's docs are explicit here: if the nameserver is behind a routing peer, you need both the nameserver configuration and a network resource or route that lets peers reach the DNS server itself.

Teams miss this constantly. They add the zone match, forget the path to 10.0.0.10:53, and then wonder why lookups time out.
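A quick way to catch this failure is to check the path to the resolver itself before debugging any zone matching. A rough Python sketch; the resolver IP is a placeholder, and this only proves TCP reachability (most lookups use UDP, which a firewall could in principle treat differently):

```python
import socket

def resolver_reachable(ip: str, port: int = 53, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the resolver's DNS port succeeds."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder internal resolver IP. If this prints False, fix the route
# to the resolver before touching any zone configuration.
print(resolver_reachable("10.0.0.10", timeout=1.0))
```

If the resolver is unreachable, every symptom downstream (timeouts, SERVFAIL, flaky name resolution) is noise.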

2. You made the internal resolver the default for everything

This is the classic "corporate VPN" mistake.

Now every coffee-shop query, hotel captive portal check, GitHub lookup, and random public hostname depends on whether your internal resolver is reachable and fast. It also means off-network behavior gets weird in exactly the ways users cannot explain.

Public names should stay public unless you have a very deliberate reason otherwise.

3. You assumed CLI DNS tools behave like the OS

They often do not.

Tailscale's MagicDNS docs explicitly note that some macOS CLI tools such as host or nslookup bypass the system resolver and therefore do not reflect MagicDNS behavior. NetBird's DNS guides make a similar point: tools like dig and nslookup can diverge from what browsers and the OS resolver are actually using.

That means your debug flow should be:

  • test with the application itself
  • test with the OS resolver (resolvectl query, browser access, curl)
  • use dig only when you understand which server you are querying directly

Do not let one misleading nslookup ruin two hours of good troubleshooting.
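The distinction is easy to demonstrate: an application-level call like Python's socket.getaddrinfo goes through the OS resolution path (hosts file, mDNS, any overlay DNS layer), while dig and nslookup speak DNS to a server directly and skip all of that. A small illustration:

```python
import socket

def os_resolve(name: str) -> list[str]:
    """Resolve a name the way applications do: through the OS resolver."""
    infos = socket.getaddrinfo(name, None, proto=socket.IPPROTO_TCP)
    # Deduplicate the returned addresses
    return sorted({info[4][0] for info in infos})

# "localhost" typically resolves via the hosts file, not DNS at all --
# exactly the kind of answer a direct `dig @server` query would never show.
print(os_resolve("localhost"))
```

If the application-path answer and the dig answer disagree, that disagreement is the clue, not a contradiction.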

4. You forced domain controllers to accept overlay DNS settings

If you run Active Directory, domain controllers should usually keep their own DNS truth. NetBird's docs even call out disabling DNS management for domain controller groups so those systems do not get their resolver behavior rewritten by the overlay.

That is the right instinct in any stack. Do not casually rewrite the resolver behavior of infrastructure that is supposed to be authoritative for the rest of your environment.

A sane design for small teams

If you are under 100 people, this is usually enough:

Need                                          -> Better answer
Reach mesh-managed servers by name            -> Use MagicDNS or the mesh's built-in peer naming
Reach a handful of internal apps by name      -> Route specific domains through a routing peer
Reach an internal zone like company.internal  -> Configure split DNS for that zone only
Reach AD-managed namespaces                   -> Push the AD resolver only to systems that need it
Support general internet browsing             -> Leave public resolvers as the default

That table will prevent most self-inflicted DNS pain.

A practical test checklist

After you configure split DNS, verify in this order:

  1. curl or open the actual internal app by name
  2. Query the hostname with the OS resolver
  3. Confirm the internal resolver IP is reachable through the mesh or subnet router
  4. Check that public domains still resolve through public DNS
  5. Disconnect the tunnel and make sure the laptop behaves normally off-network

If step 5 fails, your DNS design is too invasive.

Where this meets network segmentation

Split DNS is not just convenience. It is part of access design.

If you name only what should be reachable, and route only the zones that matter, you are reinforcing a narrower trust boundary. If you teach every laptop to use one giant corporate resolver for everything, you are quietly rebuilding the old flat-network model.

This is why good DNS work pairs naturally with network segmentation and least-privilege access design.

The RouteHarden opinion

The best split DNS design is the smallest one that makes internal services feel native.

Use mesh naming for mesh nodes. Use split-horizon DNS for actual internal zones. Route only the names and resolvers you need. Keep public DNS public. And never assume one nslookup output tells you what the operating system is really doing.

Your users do not care whether the answer came from MagicDNS, Active Directory, or an internal Unbound instance.

They care that grafana.company.internal works every time, on every laptop, without dragging the whole internet through the office.