Message-ID: <de53f740-c662-45d4-9e00-66e06937f4c6@intel.com>
Date: Thu, 26 Jun 2025 11:08:25 -0700
From: Dave Jiang <dave.jiang@...el.com>
To: Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Cc: kernel-team@...a.com, dave.hansen@...ux.intel.com, luto@...nel.org,
peterz@...radead.org, bp@...en8.de, x86@...nel.org, nadav.amit@...il.com,
seanjc@...gle.com, tglx@...utronix.de, mingo@...nel.org
Subject: Re: [RFC PATCH v4 0/8] Intel RAR TLB invalidation
On 6/19/25 1:03 PM, Rik van Riel wrote:
> This patch series adds support for IPI-less TLB invalidation
> using Intel RAR technology.
>
> Intel RAR differs from AMD INVLPGB in a few ways:
> - RAR goes through (emulated?) APIC writes, not instructions
> - RAR flushes go through a memory table with 64 entries
> - RAR flushes can be targeted to a cpumask
> - The RAR functionality must be set up at boot time before it can be used
>
> The cpumask targeting has resulted in Intel RAR and AMD INVLPGB having
> slightly different rules:
> - Processes with dynamic ASIDs use IPI based shootdowns
> - INVLPGB: processes with a global ASID
> - always have the TLB up to date, on every CPU
> - never need to flush the TLB at context switch time
> - RAR: processes with global ASIDs
> - have the TLB up to date on CPUs in the mm_cpumask
> - can skip a TLB flush at context switch time if the CPU is in the mm_cpumask
> - need to flush the TLB when scheduled on a cpu not in the mm_cpumask,
> in case it used to run there before and the TLB has stale entries
>
> RAR functionality is present on Sapphire Rapids and newer CPUs.
>
> Information about Intel RAR can be found in this whitepaper:
>
> https://www.intel.com/content/dam/develop/external/us/en/documents/341431-remote-action-request-white-paper.pdf
>
> This patch series is based on a 2019 patch series created by
> Intel, with patches later in the series modified to fit into
> the TLB flush code structure we have after AMD INVLPGB functionality
> was integrated.
>
> TODO:
> - some sort of optimization to avoid sending RARs to CPUs in deeper
> idle states when they have init_mm loaded (flush when switching to init_mm?)
>
> v4:
> - removed the chicken-and-egg problem that made it impossible to use RAR
> early in bootup; RAR can now be used to flush the local TLB (but it's broken?)
> - always flush other CPUs with RAR, no more periodic flush_tlb_func
> - separate, simplified cpumask trimming code
> - attempt to use RAR to flush the local TLB, which should work
> according to the documentation
> - add a DEBUG patch to flush the local TLB with RAR and again locally,
> may need some help from Intel to figure out why this makes a difference
> - memory dumps of rar_payload[] suggest we are sending valid RARs
> - receiving CPUs set the status from RAR_PENDING to RAR_SUCCESS
> - unclear whether the TLB is actually flushed correctly :(
Hi Rik,
Dave Hansen has asked me to reproduce this locally, and I'm trying to replicate your test setup. What are the steps you are using to test this patch series? Thanks!
DJ
> v3:
> - move cpa_flush() change out of this patch series
> - use MSR_IA32_CORE_CAPS definition, merge first two patches together
> - move RAR initialization to early_init_intel()
> - remove single-CPU "fast path" from smp_call_rar_many
> - remove smp call table RAR entries, just do a direct call
> - cleanups suggested (Ingo, Nadav, Dave, Thomas, Borislav, Sean)
> - fix !CONFIG_SMP compile in Kconfig
> - match RAR definitions to the names & numbers in the documentation
> - the code seems to work now
> v2:
> - Cleanups suggested by Ingo and Nadav (thank you)
> - Basic RAR code seems to actually work now.
> - Kernel TLB flushes with RAR seem to work correctly.
> - User TLB flushes with RAR are still broken, with two symptoms:
> - The !is_lazy WARN_ON in leave_mm() is tripped
> - Random segfaults.
>