Message-ID: <87ms2nsqju.ffs@tglx>
Date: Thu, 08 Jan 2026 23:11:33 +0100
From: Thomas Gleixner <tglx@...nel.org>
To: Marc Zyngier <maz@...nel.org>, Waiman Long <longman@...hat.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>, Clark Williams
<clrkwllms@...nel.org>, Steven Rostedt <rostedt@...dmis.org>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
linux-rt-devel@...ts.linux.dev
Subject: Re: [PATCH] irqchip/gic-v3-its: Don't acquire rt_spin_lock in
allocate_vpe_l1_table()
On Thu, Jan 08 2026 at 08:26, Marc Zyngier wrote:
> Err, no. That's horrible. I can see three ways to address this in a
> more appealing way:
>
> - you give RT a generic allocator that works for (small) atomic
> allocations. I appreciate that's not easy, and even probably
> contrary to the RT goals. But I'm also pretty sure that the GIC code
> is not the only pile of crap being caught doing that.
>
> - you pre-compute upfront how many cpumasks you are going to require,
> based on the actual GIC topology. You do that on CPU0, outside of
> the hotplug constraints, and allocate what you need. This is
> difficult as you need to ensure the RD<->CPU matching without the
> CPUs having booted, which means wading through the DT/ACPI gunk to
> try and guess what you have.
>
> - you delay the allocation of L1 tables to a context where you can
> perform allocations, and before we have a chance of running a guest
> on this CPU. That's probably the simplest option (though dealing
> with late onlining while guests are already running could be
> interesting...).
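
[Editorially added illustration, not part of the original mail] A minimal
sketch of the third option, assuming a dynamic CPU hotplug online state and
a hypothetical helper its_alloc_vpe_l1_table_sleepable() that would do the
GFP_KERNEL work which allocate_vpe_l1_table() currently does from the
low-level bringup path:

static int its_cpu_online(unsigned int cpu)
{
	/*
	 * Runs on the freshly onlined CPU in preemptible context.
	 * AP_ONLINE_DYN callbacks run before CPUHP_AP_ACTIVE, so the
	 * scheduler should not place a vCPU thread here yet and
	 * sleeping allocations are fine.
	 */
	return its_alloc_vpe_l1_table_sleepable(cpu);
}

	/* Registration, e.g. from its_init(); the name string is illustrative */
	ret = cpuhp_setup_state(CPUHP_AP_ONLINE_DYN,
				"irqchip/arm/gicv3-its:online",
				its_cpu_online, NULL);

The late-onlining case mentioned above would still need the low-level init
path to cope with the table not existing until this callback has run.
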
At the point where a CPU is brought up, the topology should be known
already, which means this can be allocated on the control CPU _before_
the new CPU comes up, no?
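
[Editorially added illustration, not part of the original mail] A minimal
sketch of that idea, assuming a dynamic prepare state whose startup callback
runs on the control CPU before the new CPU is started, and a hypothetical
per-CPU staging pointer which the low-level init on the incoming CPU would
then consume instead of allocating:

static DEFINE_PER_CPU(struct cpumask *, vpe_l1_mask_prealloc);

static int its_vpe_l1_prepare(unsigned int cpu)
{
	struct cpumask *mask;

	/* Control CPU, preemptible context: GFP_KERNEL is fine here */
	mask = kzalloc(cpumask_size(), GFP_KERNEL);
	if (!mask)
		return -ENOMEM;

	per_cpu(vpe_l1_mask_prealloc, cpu) = mask;
	return 0;
}

static int its_vpe_l1_dead(unsigned int cpu)
{
	kfree(per_cpu(vpe_l1_mask_prealloc, cpu));
	per_cpu(vpe_l1_mask_prealloc, cpu) = NULL;
	return 0;
}

	/* Registration, e.g. from its_init(); the name string is illustrative */
	ret = cpuhp_setup_state(CPUHP_BP_PREPARE_DYN,
				"irqchip/arm/gicv3-its:prepare",
				its_vpe_l1_prepare, its_vpe_l1_dead);

Whether staging the cpumask alone is enough, or the L1 table pages also need
to be preallocated the same way, depends on what allocate_vpe_l1_table()
actually requires on that path.
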
Thanks,
tglx