Message-ID: <86o6qgxayt.wl-maz@kernel.org>
Date: Thu, 09 Oct 2025 18:04:58 +0100
From: Marc Zyngier <maz@...nel.org>
To: Thierry Reding <thierry.reding@...il.com>
Cc: Thomas Gleixner <tglx@...utronix.de>,
linux-tegra@...r.kernel.org,
linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org
Subject: Re: IRQ thread timeouts and affinity
On Thu, 09 Oct 2025 17:05:15 +0100,
Thierry Reding <thierry.reding@...il.com> wrote:
>
> On Thu, Oct 09, 2025 at 03:30:56PM +0100, Marc Zyngier wrote:
> > Hi Thierry,
> >
> > On Thu, 09 Oct 2025 12:38:55 +0100,
> > Thierry Reding <thierry.reding@...il.com> wrote:
> > >
> > > Which brings me to the actual question: what is the right way to solve
> > > this? I had, maybe naively, assumed that the default CPU affinity, which
> > > includes all available CPUs, would be sufficient to have interrupts
> > > balanced across all of those CPUs, but that doesn't appear to be the
> > > case. At least not with the GIC (v3) driver which selects one CPU (CPU 0
> > > in this particular case) from the affinity mask to set the "effective
> > > affinity", which then dictates where IRQs are handled and where the
> > > corresponding IRQ thread function is run.
> >
> > There's a (GIC-specific) answer to that, and that's the "1 of N"
> > distribution model. The problem is that it is a massive headache (it
> > completely breaks with per-CPU context).
>
> Heh, that started out as a very promising first paragraph but turned
> ugly very quickly... =)
>
> > We could try and hack this in somehow, but defining a reasonable API
> > is complicated. The set of CPUs receiving 1:N interrupts is a *global*
> > set, which means you cannot have one interrupt targeting CPUs 0-1, and
> > another targeting CPUs 2-3. You can only have a single set for all 1:N
> > interrupts. How would you define such a set in a platform agnostic
> > manner so that a random driver could use this? I definitely don't want
> > to have a GIC-specific API.
>
> I see. I've been thinking that maybe the only way to solve this is using
> some sort of policy. A very simple policy might be: use CPU 0 as the
> "default" interrupt (much like it is now) because like you said there
> might be assumptions built-in that break when the interrupt is scheduled
> elsewhere. But then let individual drivers opt into the 1:N set, which
> would perhaps span all available CPUs but the first one. From an API PoV
> this would just be a flag that's passed to request_irq() (or one of its
> derivatives).
The $10k question is how you pick the victim CPUs. I can't see how
to do it in a reasonable way unless we decide that interrupts that
have an affinity matching cpu_possible_mask are 1:N. And then we're
left with wondering what to do about CPU hotplug.
>
> > Overall, there is quite a lot of work to be done in this space: the
> > machine I'm typing this from doesn't have affinity control *at
> > all*. Any interrupt can target any CPU,
>
> Well, that actually sounds pretty nice for the use-case that we have...
>
> > and if Linux doesn't expect
> > that, tough.
>
> ... but yeah, it may also break things.
Yeah. With GICv3, only SPIs can be 1:N, but on this (fruity) box, even
MSIs can be arbitrarily moved from one CPU to another. This is a
ticking bomb.
I'll see if I can squeeze out some time to look into this -- no
promises though.
M.
--
Without deviation from the norm, progress is not possible.