Adaptive PADO and DHCP Offer delay

07/05/2025

What is adaptive delay for PPPoE and IPoE?

Every broadband line starts life with a brief “hello”: in PPPoE, modems send a PADI and wait for a PADO; in DHCP‑based IPoE, they broadcast a DISCOVER and wait for an OFFER. Whichever BNG replies first wins the customer. The 5×9 virtual BNG adds a twist—a small, automatically adjusted delay before sending its reply. A lightly loaded vBNG responds almost instantly; a busier one holds back a little longer. The delay is always clamped between operator‑defined minimum and maximum values to keep protocol behaviour predictable, yet it varies just enough to steer new sessions toward the gateways that have the most spare capacity.
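The mechanics are easy to picture in code. The sketch below is a minimal illustration only: it assumes a linear mapping from session occupancy to reply delay, and the function name and numbers are invented for this post. It shows the clamping behaviour described above, not the 5×9 vBNG's actual algorithm.

    def adaptive_reply_delay(active_sessions: int, max_sessions: int,
                             min_delay_ms: float, max_delay_ms: float) -> float:
        """Map current occupancy onto a reply delay, clamped to the
        operator-defined [min, max] window (illustrative policy only)."""
        occupancy = active_sessions / max_sessions            # 0.0 idle .. 1.0 full
        delay = min_delay_ms + occupancy * (max_delay_ms - min_delay_ms)
        return max(min_delay_ms, min(delay, max_delay_ms))    # keep timing predictable

    # A lightly loaded node answers almost instantly ...
    print(adaptive_reply_delay(2_000, 32_000, 5, 200))        # ~17 ms
    # ... while a busier peer holds its PADO / DHCP Offer back a little longer.
    print(adaptive_reply_delay(28_000, 32_000, 5, 200))       # ~176 ms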

Why it matters

Keeping subscriber counts evenly distributed is harder than it looks. Traffic patterns shift by the hour, firmware pushes reboot whole regions at once, and random fibre cuts can funnel thousands of reconnects toward a single gateway. Without active load‑balancing you end up with the familiar “one hot, one cold” problem: a few BNGs creeping toward session limits while others sit half‑empty.

Adaptive reply delay solves that in three complementary ways:

  • Self‑healing distribution. Because every vBNG “advertises” its readiness through reply speed, CPEs simply lock onto the quickest response. The moment a node starts hosting more than its share, its replies slow ever so slightly, nudging the next wave of subscribers toward a less‑busy peer. Over thousands of attachment events the effect is a gently oscillating equilibrium, keeping occupancy curves across the cluster almost flat (a toy simulation after this list illustrates the effect).

  • Latency‑aware placement. Edge BNGs located closer to customers naturally respond faster than distant ones even before any artificial delay is added. Adaptive timing amplifies that advantage, so subscribers attach to the lowest‑RTT gateway available, shaving a handful of milliseconds off every packet for the life of the session.

  • Operational simplicity. Traditional load‑balancing tricks—DNS pinning, VLAN hashing, per‑access‑loop rules—create brittle dependencies and operational overhead. Reply‑based steering lives inside the protocol handshake itself: no extra tables to keep in sync, no router ACLs to debug, and no new failure modes to document. Engineers simply watch session counts equalise on their own.
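The self‑healing distribution in the first bullet can be mimicked with a toy simulation. Again, this is purely illustrative: the node names, session limits and linear load‑to‑delay mapping are made up, echoing the sketch above rather than any real deployment.

    import random

    # Toy model, not the product's code: each node's reply delay grows linearly
    # with its session count, and every new CPE attaches to whichever node
    # answers first (plus a little random network jitter).
    MAX_SESSIONS, MIN_MS, MAX_MS = 32_000, 5, 200
    bngs = {"edge-1": 9_000, "edge-2": 3_000, "edge-3": 500}   # uneven starting loads

    def reply_delay(sessions: int) -> float:
        return MIN_MS + (sessions / MAX_SESSIONS) * (MAX_MS - MIN_MS)

    for _ in range(20_000):                                    # 20k attachment events
        delays = {name: reply_delay(count) + random.uniform(0, 2)
                  for name, count in bngs.items()}
        bngs[min(delays, key=delays.get)] += 1                 # fastest PADO/Offer wins

    print(bngs)   # the three counts end up nearly equal

Run it a few times and the three counts land within a few hundred sessions of one another, which is the gently oscillating equilibrium described above.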

The bottom‑line benefits ripple outward:

  • Higher port utilisation. When every node runs near the same occupancy, overall capacity stretches further, delaying expensive hardware expansions.

  • Consistent customer experience. Evenly loaded gateways avoid the latency spikes and throughput dips that creep in when one box is forced to handle a disproportionate share of flows.

  • Effortless scaling. Drop a brand‑new vBNG instance into the cluster and, the moment it starts answering quicker than its neighbours, it immediately begins attracting sessions—no orchestration scripts needed.

Centralized redundancy use case

Most operators maintain backup BNGs that should accept sessions only when local edge gateways are unavailable. Adaptive reply delay turns that requirement into a single, elegant setting: give the central nodes a substantially longer minimum delay—even as high as one full second.

Here’s what happens in practice:

  • Normal conditions – Edge vBNGs, sitting close to subscribers, reply in a few tens of milliseconds. Clients lock onto these fast responses, keeping traffic local and latency low. The central gateways hardly see any new sessions.

  • Partial overload – If an edge node grows busy, its adaptive logic lengthens the pause to protect itself. The core node’s one‑second baseline still keeps it out of play until the edge delay approaches that ceiling, allowing a graceful “pressure relief” rather than an abrupt hand‑off.

  • Site failure – When an entire edge site vanishes (power loss, fibre cut, etc.), modems receive no PADO or DHCP Offer from the edge at all, so the slower central reply finally wins, onboarding subscribers in a controlled, last‑resort fashion.

  • Return to normal – As soon as the edge node comes back, its much faster replies immediately reacquire new sessions, and the central box’s load tapers off again without operator intervention.
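To put numbers on the sequence above (hypothetical figures, since the real minimum and maximum delays are operator‑chosen): an edge node with a 20-800 ms adaptive window stays ahead of a central backup pinned at a 1,000 ms minimum until it either saturates or stops answering altogether.

    # Hypothetical settings: an edge node with a 20-800 ms adaptive window and a
    # central backup whose minimum delay is pinned at 1000 ms.
    EDGE_MIN, EDGE_MAX, CENTRAL_MIN = 20, 800, 1_000

    for label, occupancy in [("normal", 0.15), ("partial overload", 0.95)]:
        edge_delay = EDGE_MIN + occupancy * (EDGE_MAX - EDGE_MIN)
        winner = "edge" if edge_delay < CENTRAL_MIN else "central"
        print(f"{label:17s} edge replies in {edge_delay:3.0f} ms -> {winner} wins")

    # Site failure: the edge never replies at all, so the modem only hears the
    # central node's 1000 ms offer, which finally wins as a last resort.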

Because the maximum delay is still bounded by protocol expectations, even a one‑second pause leaves ample margin before modem time‑outs. The approach provides deterministic fail‑over, prevents oscillation, and eliminates the need for complex routing ACLs, BGP prepends, or DNS juggling—yet it remains completely transparent to the customer.

Conclusion

Adaptive reply delay turns a millisecond‑scale pause into an elegant traffic director. By letting each vBNG broadcast its readiness through response speed—and by allowing central backups to sit behind a deliberately long baseline delay—the network gains a built‑in, always‑on load balancer that spreads subscribers smoothly and activates redundancy only when truly needed. It shows that the simplest way to orchestrate millions of broadband sessions may be nothing more exotic than a well‑timed reply.