Message-ID: <87cy6m1xvc.ffs@tglx>
Date: Thu, 16 Oct 2025 20:53:59 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Thierry Reding <thierry.reding@...il.com>, Marc Zyngier <maz@...nel.org>
Cc: linux-tegra@...r.kernel.org, linux-arm-kernel@...ts.infradead.org,
 linux-kernel@...r.kernel.org
Subject: Re: IRQ thread timeouts and affinity

On Thu, Oct 09 2025 at 13:38, Thierry Reding wrote:
> We've been running into an issue on some systems (NVIDIA Grace chips)
> where either during boot or at runtime, CPU 0 can be under very high
> load and cause some IRQ thread functions to be delayed to a point where
> we encounter the timeout in the work submission parts of the driver.
>
> Specifically this happens for the Tegra QSPI controller driver found
> in drivers/spi/spi-tegra210-quad.c. This driver uses an IRQ thread to
> wait for and process "transfer ready" interrupts (which need to run
> DMA transfers or copy from the hardware FIFOs using PIO to get the
> SPI transfer data). Under heavy load, we've seen the IRQ thread delayed
> by up to several seconds.

If the interrupt thread which runs with SCHED_FIFO is delayed for
multiple seconds, then there is something seriously wrong to begin with.

You fail to explain how that happens in the first place. Heavy load
alone is not a good explanation: ordinary SCHED_OTHER load cannot
preempt a SCHED_FIFO thread.
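
For reference, the interrupt core raises every threaded handler to
SCHED_FIFO when the thread is created. A sketch after setup_irq_thread()
in kernel/irq/manage.c (simplified, details vary by kernel version):

	/*
	 * Simplified from setup_irq_thread(): the handler thread is
	 * created and unconditionally raised to SCHED_FIFO priority
	 * MAX_RT_PRIO / 2, i.e. FIFO 50.
	 */
	static int setup_irq_thread(struct irqaction *new, unsigned int irq)
	{
		struct task_struct *t;

		t = kthread_create(irq_thread, new, "irq/%d-%s", irq,
				   new->name);
		if (IS_ERR(t))
			return PTR_ERR(t);

		sched_set_fifo(t);
		new->thread = get_task_struct(t);
		return 0;
	}

So a delay of seconds points at another RT task monopolizing that CPU, a
long non-preemptible section, or the thread being pinned to a CPU which
it never gets.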

> Alternatively, would it be possible (and make sense) to make the IRQ
> core code schedule threads across more CPUs? Is there a particular
> reason that the IRQ thread runs on the same CPU that services the IRQ?

Locality. Also remote wakeups are way more expensive than local wakeups.

Though there is no actual hard requirement to force it onto the same
CPU. What could be done is to have a flag which binds the thread to the
real affinity mask instead of the effective affinity mask so it can be
scheduled freely. Needs some thought, but should work.
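
A rough sketch of how that could look, based on irq_thread_check_affinity()
in kernel/irq/manage.c. IRQF_BIND_WIDE is a made-up name purely for
illustration, not an existing flag:

	/*
	 * Illustration only: IRQF_BIND_WIDE is invented here. When set,
	 * bind the thread to the full requested affinity mask instead of
	 * the effective one, so the scheduler may place it on any CPU in
	 * the mask.
	 */
	static void irq_thread_update_affinity(struct irq_desc *desc,
					       struct irqaction *action)
	{
		cpumask_var_t mask;
		bool valid = false;

		if (!alloc_cpumask_var(&mask, GFP_KERNEL))
			return;

		raw_spin_lock_irq(&desc->lock);
		if (cpumask_available(desc->irq_common_data.affinity)) {
			const struct cpumask *m;

			if (action->flags & IRQF_BIND_WIDE)
				m = desc->irq_common_data.affinity;
			else
				m = irq_data_get_effective_affinity_mask(&desc->irq_data);
			cpumask_copy(mask, m);
			valid = true;
		}
		raw_spin_unlock_irq(&desc->lock);

		if (valid)
			set_cpus_allowed_ptr(current, mask);
		free_cpumask_var(mask);
	}

The tradeoff is the locality point above: wakeups can become remote, but
the thread is no longer starved when its home CPU is monopolized.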

> Maybe another way would be to "reserve" CPU 0 for core OS drivers like
> QSPI (the TPM is connected to this controller) and make sure that all
> CPU-intensive tasks do not run on that CPU?
>
> I know that things like irqbalance and taskset exist to solve some of
> these problems, but they do not work when we hit these cases at boot
> time.

I'm still completely failing to see how you end up with multiple seconds
of delay for that thread, especially during boot. What exactly keeps it
from getting scheduled?

Thanks,

        tglx
 
