Open Source and information security mailing list archives
 
Message-ID: <aV7tYRnVikZXAC23@pop-os.localdomain>
Date: Wed, 7 Jan 2026 15:33:53 -0800
From: Cong Wang <xiyou.wangcong@...il.com>
To: Stephen Hemminger <stephen@...workplumber.org>
Cc: netdev@...r.kernel.org, William Liu <will@...lsroot.io>,
	Savino Dicanosa <savy@...t3mfailure.io>
Subject: Re: [Patch net v6 4/8] net_sched: Implement the right netem
 duplication behavior

On Tue, Dec 30, 2025 at 09:28:50AM -0800, Stephen Hemminger wrote:
> On Sat, 27 Dec 2025 11:41:31 -0800
> Cong Wang <xiyou.wangcong@...il.com> wrote:
> 
> > In the old behavior, duplicated packets were sent back to the root qdisc,
> > which could create dangerous infinite loops in hierarchical setups -
> > imagine a scenario where each level of a multi-stage netem hierarchy kept
> > feeding duplicates back to the top, potentially causing system instability
> > or resource exhaustion.
> > 
> > The new behavior elegantly solves this by enqueueing duplicates to the same
> > qdisc that created them, ensuring that packet duplication occurs exactly
> > once per netem stage in a controlled, predictable manner. This change
> > enables users to safely construct complex network emulation scenarios using
> > netem hierarchies (like the 4x multiplication demonstrated in testing)
> > without worrying about runaway packet generation, while still preserving
> > the intended duplication effects.
> > 
> > Another advantage of this approach is that it eliminates the reentrant
> > enqueue behavior which triggered many vulnerabilities. See the last patch
> > in this patchset, which updates the test cases for those vulnerabilities.
> > 
> > Now users can confidently chain multiple netem qdiscs together to achieve
> > sophisticated network impairment combinations, knowing that each stage will
> > apply its effects exactly once to the packet flow, making network testing
> > scenarios more reliable and results more deterministic.
> > 
> > I tested netem packet duplication in two configurations:
> > 1. Nested netem-to-netem hierarchy using parent/child attachment
> > 2. Single netem using a prio qdisc with a netem leaf
> > 
> > Setup commands and results:
> > 
> > Single netem hierarchy (prio + netem):
> >   tc qdisc add dev lo root handle 1: prio bands 3 priomap 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
> >   tc filter add dev lo parent 1:0 protocol ip matchall classid 1:1
> >   tc qdisc add dev lo parent 1:1 handle 10: netem limit 4 duplicate 100%
> > 
> > Result: 2x packet multiplication (1→2 packets)
> >   2 echo requests + 4 echo replies = 6 total packets
> > 
> > Expected behavior: Only one netem stage exists in this hierarchy, so
> > 1 ping becomes 2 packets (100% duplication). The 2 echo requests generate
> > 2 echo replies, which also get duplicated to 4 replies, yielding the
> > predictable total of 6 packets (2 requests + 4 replies).
> > 
> > Nested netem hierarchy (netem + netem):
> >   tc qdisc add dev lo root handle 1: netem limit 1000 duplicate 100%
> >   tc qdisc add dev lo parent 1: handle 2: netem limit 1000 duplicate 100%
> > 
> > Result: 4x packet multiplication (1→2→4 packets)
> >   4 echo requests + 16 echo replies = 20 total packets
> > 
> > Expected behavior: Root netem duplicates 1 ping to 2 packets, child netem
> > receives 2 packets and duplicates each to create 4 total packets. Since
> > ping operates bidirectionally, 4 echo requests generate 4 echo replies,
> > which also get duplicated through the same hierarchy (4→8→16), resulting
> > in the predictable total of 20 packets (4 requests + 16 replies).
> > 
> > The new netem duplication behavior does not break the documented
> > semantics of "creates a copy of the packet before queuing." The man page
> > description remains true since duplication occurs before the queuing
> > process, creating both original and duplicate packets that are then
> > enqueued. The documentation does not specify which qdisc should receive
> > the duplicates, only that copying happens before queuing. The implementation
> > choice to enqueue duplicates to the same qdisc (rather than root) is an
> > internal detail that maintains the documented behavior while preventing
> > infinite loops in hierarchical configurations.
> > 
> > Fixes: 0afb51e72855 ("[PKT_SCHED]: netem: reinsert for duplication")
> > Reported-by: William Liu <will@...lsroot.io>
> > Reported-by: Savino Dicanosa <savy@...t3mfailure.io>
> > Signed-off-by: Cong Wang <xiyou.wangcong@...il.com>
> 
> It is worth testing for the case where netem is used as a leaf qdisc.
> I worry that this could cause the parent qdisc to get accounting wrong.
> I.e. if HTB calls netem and netem enqueues 2 packets, the qlen in HTB
> would be incorrect.

In patch 6/8, I added "Test PRIO with NETEM duplication", which installs a
netem qdisc as a child and leaf of the root prio qdisc.

Or am I misunderstanding it?
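For reference, the packet totals in the tests quoted above follow directly
from each 100% duplication stage doubling the flow exactly once: echo
requests traverse the netem chain, and so do the replies. A minimal sketch
of that arithmetic, using a hypothetical `expected_totals` helper (not part
of the patch):

```python
def expected_totals(stages: int, pings: int = 1) -> tuple[int, int, int]:
    """Packet counts for a chain of netem qdiscs, each with 100% duplication.

    Under the new behavior, every stage duplicates each packet exactly once,
    so a chain of `stages` qdiscs multiplies the flow by 2**stages in each
    direction.
    """
    mult = 2 ** stages
    requests = pings * mult      # echo requests after traversing the chain
    replies = requests * mult    # each reply traverses the same chain again
    return requests, replies, requests + replies

# Single netem under prio (1 stage): 2 requests + 4 replies = 6 packets
print(expected_totals(1))
# Nested netem (2 stages): 4 requests + 16 replies = 20 packets
print(expected_totals(2))
```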

Regards,
Cong
