Message-ID: <willemdebruijn.kernel.795154e3cfd@gmail.com>
Date: Wed, 20 Aug 2025 07:17:02 -0400
From: Willem de Bruijn <willemdebruijn.kernel@...il.com>
To: Xin Zhao <jackzxcui1989@....com>, 
 willemdebruijn.kernel@...il.com, 
 edumazet@...gle.com, 
 ferenc@...es.dev
Cc: davem@...emloft.net, 
 kuba@...nel.org, 
 pabeni@...hat.com, 
 horms@...nel.org, 
 netdev@...r.kernel.org, 
 linux-kernel@...r.kernel.org
Subject: Re: [PATCH net-next v4] net: af_packet: Use hrtimer to do the retire
 operation

Xin Zhao wrote:
> On Tue, 2025-08-19 at 22:18 +0800, Willem wrote:
> 
> > > -static void _prb_refresh_rx_retire_blk_timer(struct tpacket_kbdq_core *pkc)
> > > +static void _prb_refresh_rx_retire_blk_timer(struct tpacket_kbdq_core *pkc,
> > > +					     bool start, bool callback)
> > >  {
> > > -	mod_timer(&pkc->retire_blk_timer,
> > > -			jiffies + pkc->tov_in_jiffies);
> > > +	unsigned long flags;
> > > +
> > > +	local_irq_save(flags);
> > 
> > The two environments that can race are the timer callback running in
> > softirq context or the open_block from tpacket_rcv in process context.
> > 
> > So worst case the process context path needs to disable bh?
> > 
> > As you pointed out, the accesses to the hrtimer fields are already
> > protected, by the caller holding sk.sk_receive_queue.lock.
> > 
> > So it should be sufficient to just test hrtimer_is_queued inside that
> > critical section before calling hrtimer_start?
> > 
> > Side-note: tpacket_rcv calls spin_lock, not spin_lock_bh. But if the
> > same lock can also be taken in softirq context, the process context
> > caller should use the _bh variant. This is not new with your patch.
> > Classical timers also run in softirq context. I may be overlooking
> > something, will need to take a closer look at that.
> > 
> > In any case, I don't think local_irq_save is needed. 
> 
> Indeed, local_irq_save is not needed. The perf_mux_hrtimer_restart example I
> referenced is a different case from ours: our timer callback does not run in
> hard interrupt context, so disabling hard interrupts is unnecessary. I will
> make this change in PATCH v6.
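
For reference, a minimal sketch of the approach suggested above, not the
actual patch: it keeps the original single-argument signature and assumes
retire_blk_timer has been converted to a struct hrtimer expiring in softirq
(soft mode), with the retire interval stored in nanoseconds in a field called
tov_ns here (a placeholder name). The caller holds sk.sk_receive_queue.lock,
which also serializes against the retire callback, so no extra irq or bh
disabling is added:

static void _prb_refresh_rx_retire_blk_timer(struct tpacket_kbdq_core *pkc)
{
	/* sk.sk_receive_queue.lock is held by the caller, so this
	 * test-and-start cannot race with the callback rearming itself.
	 */
	if (!hrtimer_is_queued(&pkc->retire_blk_timer))
		hrtimer_start(&pkc->retire_blk_timer,
			      ns_to_ktime(pkc->tov_ns),
			      HRTIMER_MODE_REL_SOFT);
}

hrtimer_is_queued() by itself gives no ordering guarantee; it is the queue
lock held around the check and the start that makes the sequence safe.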
> 
> On Wed, 2025-08-20 at 4:21 +0800, Willem wrote:
>  
> > > So worst case the process context path needs to disable bh?
> > > 
> > > As you pointed out, the accesses to the hrtimer fields are already
> > > protected, by the caller holding sk.sk_receive_queue.lock.
> > > 
> > > So it should be sufficient to just test hrtimer_is_queued inside that
> > > critical section before calling hrtimer_start?
> > > 
> > > Side-note: tpacket_rcv calls spin_lock, not spin_lock_bh. But if the
> > > same lock can also be taken in softirq context, the process context
> > > caller should use the _bh variant. This is not new with your patch.
> > > Classical timers also run in softirq context. I may be overlooking
> > > something, will need to take a closer look at that.
> > > 
> > > In any case, I don't think local_irq_save is needed. 
> > 
> > I meant prb_open_block, not open_block.
> > 
> > tpacket_rcv runs in softirq context (from __netif_receive_skb_core)
> > or with bottom halves disabled (from __dev_queue_xmit, or if rx uses
> > napi_threaded).
> > 
> > That is likely why the spin_lock_bh variant is not explicitly needed.
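
A rough schematic of the two lock users (placeholder function names, not the
real af_packet.c code): both sides run with bottom halves disabled, which is
why plain spin_lock on sk_receive_queue.lock is enough.

/* Receive side: entered from __netif_receive_skb_core (softirq context) or
 * from __dev_queue_xmit / threaded NAPI with bottom halves disabled.
 */
static void rx_side_sketch(struct sock *sk)
{
	spin_lock(&sk->sk_receive_queue.lock);	/* BH already off here */
	/* prb_open_block() and the timer refresh happen under this lock */
	spin_unlock(&sk->sk_receive_queue.lock);
}

/* Retire side: a soft hrtimer callback also fires in softirq context.
 * Because the receive side only holds the lock with BH disabled, this
 * callback can never interrupt it mid-critical-section on the same CPU,
 * so the plain spin_lock variant does not deadlock.
 */
static enum hrtimer_restart retire_side_sketch(struct hrtimer *timer)
{
	/* spin_lock(&sk->sk_receive_queue.lock); ...; spin_unlock(...); */
	return HRTIMER_NORESTART;
}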
> 
> Before I saw your reply, I was almost ready to replace spin_lock with
> spin_lock_bh before the packet_current_rx_frame call in tpacket_rcv in our
> project. Until your reply, I could not understand why we had never hit any
> deadlocks or RCU issues from not using the _bh variant there.
> 
> I truly admire how quickly you identified all the contexts in which
> tpacket_rcv can run. For me, finding every place where tpacket_rcv is
> assigned to prot_hook.func and then called through it is a long and painful
> task, and even if I found them all I would still worry about missing some.

Thanks. I also reasoned backwards. If there had been a problem,
lockdep would have reported it long ago.
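
(For completeness: lockdep only reports such problems on kernels built with
lock debugging enabled, e.g.

    CONFIG_DEBUG_KERNEL=y
    CONFIG_PROVE_LOCKING=y   # lockdep, including irq/softirq lock-context checks

which many debug and test configs already carry.)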
