Message-ID: <aGMZL+dIGdutt3Bf@pop-os.localdomain>
Date: Mon, 30 Jun 2025 16:09:35 -0700
From: Cong Wang <xiyou.wangcong@...il.com>
To: Xiang Mei <xmei5@....edu>
Cc: Jamal Hadi Salim <jhs@...atatu.com>, security@...nel.org,
Linux Kernel Network Developers <netdev@...r.kernel.org>
Subject: Re: sch_qfq: race condition on qfq_aggregate (net/sched/sch_qfq.c)
Hi Xiang,
On Mon, Jun 30, 2025 at 11:49:02AM -0700, Xiang Mei wrote:
> Thank you very much for your time. We've re-tested the PoC and
> confirmed it works on the latest kernels (6.12.35, 6.6.95, and
> 6.16-rc4).
>
> To help with reproduction, here are a few notes that might be useful:
> 1. The QFQ scheduler needs to be compiled into the kernel:
> $ scripts/config --enable CONFIG_NET_SCHED
> $ scripts/config --enable CONFIG_NET_SCH_QFQ
> 2. Since this is a race condition, the test environment should have at
> least two cores (e.g., -smp cores=2 for QEMU).
> 3. The PoC was compiled using: `gcc ./poc.c -o ./poc -w --static`
> 4. Before running the PoC, please check that the network interface
> "lo" is in the "up" state.
>
> Appreciate your feedback and patience.
Thanks for your detailed report and for the effort of reproducing it on the
latest kernels.

I think we may have a bigger problem here: sch_tree_lock() is there to lock
out the datapath, so I doubt we really need to use sch_tree_lock() for
qfq->agg. _If_ it is only used on the control path, the RTNL lock + RCU
should be sufficient. We need a deeper review of the locking there.
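
To make the suggestion concrete, here is a rough sketch (not a patch) of the
locking model I have in mind: writers on the control path rely on RTNL, and
any datapath readers use RCU. The struct and field names below are made up
for illustration and do not match the actual sch_qfq.c layout.

	/*
	 * Sketch only: control-path writers are serialized by RTNL,
	 * datapath readers (if any) use an RCU read-side section.
	 * "demo_sched"/"demo_agg" are placeholders, not sch_qfq.c types.
	 */
	#include <linux/types.h>
	#include <linux/rtnetlink.h>
	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct demo_agg {
		u32		budget;
		struct rcu_head	rcu;
	};

	struct demo_sched {
		struct demo_agg __rcu *agg;	/* hypothetical aggregate pointer */
	};

	/* Control path (qdisc change/destroy): already runs under RTNL. */
	static void demo_replace_agg(struct demo_sched *q,
				     struct demo_agg *new_agg)
	{
		struct demo_agg *old;

		ASSERT_RTNL();

		old = rtnl_dereference(q->agg);
		rcu_assign_pointer(q->agg, new_agg);
		if (old)
			kfree_rcu(old, rcu);	/* freed after readers drain */
	}

	/* Datapath reader, if one exists: plain RCU read-side critical section. */
	static u32 demo_read_budget(struct demo_sched *q)
	{
		struct demo_agg *agg;
		u32 budget = 0;

		rcu_read_lock();
		agg = rcu_dereference(q->agg);
		if (agg)
			budget = agg->budget;
		rcu_read_unlock();

		return budget;
	}

Whether the datapath actually dereferences the aggregate outside the qdisc
lock is exactly the part that needs the deeper review.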
Regards,
Cong