Message-ID: <1eefc8a3-a54f-44e7-ae60-739047230ac4@intel.com>
Date: Fri, 2 May 2025 09:36:13 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Steven Rostedt <rostedt@...dmis.org>,
Valentin Schneider <vschneid@...hat.com>, linux-kernel@...r.kernel.org,
kvm@...r.kernel.org, linux-arch@...r.kernel.org,
Josh Poimboeuf <jpoimboe@...nel.org>, Daniel Wagner <dwagner@...e.de>,
Sean Christopherson <seanjc@...gle.com>, Juergen Gross <jgross@...e.com>,
Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar <mingo@...hat.com>,
Borislav Petkov <bp@...en8.de>, x86@...nel.org,
"H. Peter Anvin" <hpa@...or.com>, Rik van Riel <riel@...com>
Subject: x86 RAR implementation

Trimming down the cc list (and oh, what a cc list it was!!!) to x86 folks.

On 5/2/25 08:20, Peter Zijlstra wrote:
> So where IPI is:
>
> - IPI all CPUs
> - local invalidate
> - wait for completion
To drill down on this a bit, the IPI case is actually something like:

for_each_cpu(cpu, IPI_cpumask)
	per_cpu_ptr(cpu)->csd = 1;
send_ipi(IPI_cpumask)
// local invalidate
// wait for completion

... and send_ipi() can itself be a for loop if the APIC is in clustered
mode. So in practice there is at least _a_ for loop here, because each
CPU has a per-cpu structure telling it what to do when the IPI arrives.
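
Sketched out as C, that setup loop might look roughly like the below.
It's loosely modeled on the kernel's smp_call_function_many() path, but
the flush_request structure and the queue_flush_ipis() name are made up
for illustration, not the real kernel fields:

struct flush_request {
	int pending;		/* 1 = this CPU owes us a flush */
};
static DEFINE_PER_CPU(struct flush_request, flush_req);

static void queue_flush_ipis(const struct cpumask *mask)
{
	int cpu;

	/* One per-CPU store before any IPI goes out: */
	for_each_cpu(cpu, mask)
		per_cpu_ptr(&flush_req, cpu)->pending = 1;

	/*
	 * The send can itself loop again, one IPI per cluster,
	 * when the APIC is in clustered mode.
	 */
	apic->send_IPI_mask(mask, CALL_FUNCTION_VECTOR);
}
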
> This then becomes:
>
> for ()
> - RAR some CPUs
> - wait for completion
Were you thinking that the "some CPUs" part was limited to 64 because of
the size of the payload table and action vectors? Maybe I was just
thinking of arranging the data structures differently than you were.
I was figuring that we could use one entry in the payload table per IPI
operation, *not* one per CPU. Something like:
e = alloc_payload_entry();
payload_table[e] = payload;
for_each_cpu(cpu, RAR_cpumask)
	per_cpu_ptr(cpu)->action_vector[e] = RAR_PENDING;
send_ipi(RAR_cpumask)
// local invalidate
// wait for completion
free_table_entry(e);

In that silly scheme, the allocation can fail. But in that case it's
easy to just fall back to IPIs. I _think_ that works, but it's all in my
head and maybe I'm missing something silly.
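
To make the alloc/free part concrete, here's a minimal sketch of what
alloc_payload_entry() and free_table_entry() could be, assuming one
global payload table whose 64 slots are tracked with a bitmap. The
bitmap-plus-spinlock scheme is just an assumption for illustration, not
anything from the spec:

#define RAR_TABLE_SIZE	64

static DEFINE_RAW_SPINLOCK(rar_table_lock);
static DECLARE_BITMAP(rar_table_used, RAR_TABLE_SIZE);

/* Returns a free payload table index, or -1 to fall back to IPIs. */
static int alloc_payload_entry(void)
{
	int e;

	raw_spin_lock(&rar_table_lock);
	e = find_first_zero_bit(rar_table_used, RAR_TABLE_SIZE);
	if (e < RAR_TABLE_SIZE)
		__set_bit(e, rar_table_used);
	raw_spin_unlock(&rar_table_lock);

	return e < RAR_TABLE_SIZE ? e : -1;
}

static void free_table_entry(int e)
{
	raw_spin_lock(&rar_table_lock);
	__clear_bit(e, rar_table_used);
	raw_spin_unlock(&rar_table_lock);
}

A -1 return is the "allocation failed" case where the caller would just
take the plain IPI path instead.
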
I think the mechanism you were thinking of was something like this
(diff'd from what I had above):
- e = alloc_payload_entry();
+ e = smp_processor_id();
  payload_table[e] = payload;
  for_each_cpu(cpu, RAR_cpumask)
  	per_cpu_ptr(cpu)->action_vector[e] = RAR_PENDING;
  send_ipi(RAR_cpumask)
  // local invalidate
  // wait for completion
- free_table_entry(e);

That beats my scheme because it doesn't have any allocation, free, or
locking overhead and can't fail to allocate. But it would be limited to
<=64 CPUs because the payload table and action vectors are only 64
entries long.
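
Written out, that no-allocation variant is just a sketch like this;
rar_flush(), send_rar_ipi(), rar_state, and the u64 payload type are
all placeholders, with only payload_table, action_vector, and
RAR_PENDING coming from the discussion above:

#define RAR_MAX_ENTRIES	64	/* fixed table size */
#define RAR_PENDING	1	/* placeholder action value */

static u64 payload_table[RAR_MAX_ENTRIES];	/* real payloads are wider */

struct rar_cpu_state {
	u8 action_vector[RAR_MAX_ENTRIES];
};
static DEFINE_PER_CPU(struct rar_cpu_state, rar_state);

static void rar_flush(const struct cpumask *mask, u64 payload)
{
	/* Indexing by CPU number is what caps this at 64 CPUs: */
	int e = smp_processor_id();
	int cpu;

	payload_table[e] = payload;
	for_each_cpu(cpu, mask)
		per_cpu_ptr(&rar_state, cpu)->action_vector[e] = RAR_PENDING;
	send_rar_ipi(mask);	/* made-up helper */
	/* local invalidate, wait for completion; nothing to free */
}
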