Message-ID: <c23143c9edb2444e145849d46794d580715eeb8f.camel@siemens.com>
Date: Tue, 18 Nov 2025 09:51:02 +0000
From: "Sverdlin, Alexander" <alexander.sverdlin@...mens.com>
To: "o.rempel@...gutronix.de" <o.rempel@...gutronix.de>
CC: "a.fatoum@...gutronix.de" <a.fatoum@...gutronix.de>,
	"kernel@...gutronix.de" <kernel@...gutronix.de>, "linux-iio@...r.kernel.org"
	<linux-iio@...r.kernel.org>, "Jonathan.Cameron@...wei.com"
	<Jonathan.Cameron@...wei.com>, "wbg@...nel.org" <wbg@...nel.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"paulmck@...nel.org" <paulmck@...nel.org>, "bigeasy@...utronix.de"
	<bigeasy@...utronix.de>, "stable@...r.kernel.org" <stable@...r.kernel.org>
Subject: Re: [PATCH] counter: interrupt-cnt: Drop IRQF_NO_THREAD flag

Hi Oleksij!

On Tue, 2025-11-18 at 10:12 +0100, Oleksij Rempel wrote:
> > An IRQ handler can either be IRQF_NO_THREAD or acquire spinlock_t, as
> > CONFIG_PROVE_RAW_LOCK_NESTING warns:
> > =============================
> > [ BUG: Invalid wait context ]
> > 6.18.0-rc1+git... #1
> > -----------------------------
> > some-user-space-process/1251 is trying to lock:
> > (&counter->events_list_lock){....}-{3:3}, at: counter_push_event [counter]
> > other info that might help us debug this:
> > context-{2:2}
> > no locks held by some-user-space-process/....
> > stack backtrace:
> > CPU: 0 UID: 0 PID: 1251 Comm: some-user-space-process 6.18.0-rc1+git... #1 PREEMPT
> > Call trace:
> >   show_stack (C)
> >   dump_stack_lvl
> >   dump_stack
> >   __lock_acquire
> >   lock_acquire
> >   _raw_spin_lock_irqsave
> >   counter_push_event [counter]
> >   interrupt_cnt_isr [interrupt_cnt]
> >   __handle_irq_event_percpu
> >   handle_irq_event
> >   handle_simple_irq
> >   handle_irq_desc
> >   generic_handle_domain_irq
> >   gpio_irq_handler
> >   handle_irq_desc
> >   generic_handle_domain_irq
> >   gic_handle_irq
> >   call_on_irq_stack
> >   do_interrupt_handler
> >   el0_interrupt
> >   __el0_irq_handler_common
> >   el0t_64_irq_handler
> >   el0t_64_irq
> > 
> > ... and as Sebastian correctly points out. Remove IRQF_NO_THREAD as an
> > alternative to switching to raw_spinlock_t, because the latter would limit
> > all potential nested locks to raw_spinlock_t only.
> > 
> > Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
> > Cc: stable@...r.kernel.org
> > Link: https://lore.kernel.org/all/20251117151314.xwLAZrWY@linutronix.de/
> > Fixes: a55ebd47f21f ("counter: add IRQ or GPIO based counter")
> > Signed-off-by: Alexander Sverdlin <alexander.sverdlin@...mens.com>
> > ---
> >   drivers/counter/interrupt-cnt.c | 3 +--
> >   1 file changed, 1 insertion(+), 2 deletions(-)
> > 
> > diff --git a/drivers/counter/interrupt-cnt.c b/drivers/counter/interrupt-cnt.c
> > index 6c0c1d2d7027d..e6100b5fb082e 100644
> > --- a/drivers/counter/interrupt-cnt.c
> > +++ b/drivers/counter/interrupt-cnt.c
> > @@ -229,8 +229,7 @@ static int interrupt_cnt_probe(struct platform_device *pdev)
> >   
> >   	irq_set_status_flags(priv->irq, IRQ_NOAUTOEN);
> >   	ret = devm_request_irq(dev, priv->irq, interrupt_cnt_isr,
> > -			       IRQF_TRIGGER_RISING | IRQF_NO_THREAD,
> > -			       dev_name(dev), counter);
> > +			       IRQF_TRIGGER_RISING, dev_name(dev), counter);
> >   	if (ret)
> >   		return ret;
> >   
> 
> Hm, I guess it will break the requirement to handle at least 10 kHz
> interrupts. Maybe we should move only counter_push_event() to the
> thread? Or use a delayed worker?
> 
> Right now I do not have the needed system for testing to come up with a
> better proposal.

I thought about the possible performance implications of the patch.
A performance regression would only happen with PREEMPT_RT.
However, the driver must have been broken (and by that I mean really broken,
as in "scheduling while atomic") on PREEMPT_RT from the very beginning, and
I suppose your initial tests were not performed with a PREEMPT_RT kernel.

So overall there should be no performance regression in practice.
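
Just to illustrate the split you suggest (keep the counting in hard-IRQ
context, move counter_push_event() into the IRQ thread), an untested sketch
could look roughly like the following. The struct/field names are assumed
from the current driver, and coalescing of events at high interrupt rates is
not addressed here:

/* Untested sketch only, not the actual driver code. */
static irqreturn_t interrupt_cnt_isr(int irq, void *dev_id)
{
	struct counter_device *counter = dev_id;
	struct interrupt_cnt_priv *priv = counter_priv(counter);

	/* Cheap, lock-free accounting stays in hard-IRQ context. */
	atomic_inc(&priv->count);

	/* Defer the event push (which takes a spinlock_t) to the thread. */
	return IRQ_WAKE_THREAD;
}

static irqreturn_t interrupt_cnt_thread(int irq, void *dev_id)
{
	struct counter_device *counter = dev_id;

	/* Runs in a kthread, so acquiring a spinlock_t is fine here. */
	counter_push_event(counter, COUNTER_EVENT_CHANGE_OF_STATE, 0);

	return IRQ_HANDLED;
}

	/* In probe(), instead of the plain devm_request_irq() call: */
	ret = devm_request_threaded_irq(dev, priv->irq, interrupt_cnt_isr,
					interrupt_cnt_thread,
					IRQF_TRIGGER_RISING, dev_name(dev),
					counter);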

-- 
Alexander Sverdlin
Siemens AG
www.siemens.com
