Message-ID: <5bda5b49-60a0-41d5-8bd9-c52794a645a8@intel.com>
Date: Mon, 6 Jan 2025 07:54:50 -0800
From: Dave Hansen <dave.hansen@...el.com>
To: Peter Zijlstra <peterz@...radead.org>, Borislav Petkov <bp@...en8.de>
Cc: Rik van Riel <riel@...riel.com>, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...a.com,
dave.hansen@...ux.intel.com, luto@...nel.org, tglx@...utronix.de,
mingo@...hat.com, hpa@...or.com, akpm@...ux-foundation.org,
nadav.amit@...il.com, zhengqi.arch@...edance.com, linux-mm@...ck.org
Subject: Re: [PATCH 01/12] x86/mm: make MMU_GATHER_RCU_TABLE_FREE
unconditional
On 1/4/25 08:27, Peter Zijlstra wrote:
>> Or should we make this unconditional on all native configurations, because we
>> don't care about the overhead and would like to have simpler code? I mean,
>> disabling IRQs vs. batching and allocating memory...?
> Disabling IRQs on the GUP-fast side stays; it acts as an RCU read-side
> section. Also, mmu_gather reverts to sending IPIs if it runs out of
> memory (extremely rare).
>
> I don't think there is measurable overhead from doing the separate table
> batching, but I'm sure the robots will tell us.
We should _try_ to make it unconditional for simplicity if nothing else.
BTW, a few years back, some folks at Intel turned on
MMU_GATHER_RCU_TABLE_FREE and ran the usual 0day/LKP tests. I _think_ it
was when we were exploring the benefits of Intel's IPI-free TLB flushing
mechanism. We didn't find anything remarkable either way (IIRC).
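For anyone following along, here is a rough userspace sketch of the
batch-then-fall-back shape Peter describes above. This is not the kernel
code -- free_table_by_rcu() and sync_with_gup_fast_by_ipi() are stand-ins
for roughly what mm/mmu_gather.c does with call_rcu() on the batch and
tlb_remove_table_sync_one():

	/*
	 * Rough sketch only: models the lazy batch allocation in
	 * tlb_remove_table() and its fallback when that allocation
	 * fails.  All names here are stand-ins, not kernel APIs.
	 */
	#include <stdio.h>
	#include <stdlib.h>

	#define MAX_TABLE_BATCH 16

	struct table_batch {
		int nr;
		void *tables[MAX_TABLE_BATCH];
	};

	/* Stand-in for queueing the batch behind an RCU grace period. */
	static void free_table_by_rcu(struct table_batch *batch)
	{
		for (int i = 0; i < batch->nr; i++)
			free(batch->tables[i]);
		free(batch);
	}

	/*
	 * Stand-in for the IPI broadcast.  GUP-fast walks tables with
	 * IRQs disabled, so once every CPU has taken the IPI no walker
	 * can still hold a reference and the table can be freed at once.
	 */
	static void sync_with_gup_fast_by_ipi(void)
	{
		printf("batch allocation failed: sync via IPI, free now\n");
	}

	static void remove_table(struct table_batch **batch, void *table)
	{
		if (!*batch) {
			*batch = calloc(1, sizeof(**batch));
			if (!*batch) {
				/* The extremely rare out-of-memory path. */
				sync_with_gup_fast_by_ipi();
				free(table);
				return;
			}
		}
		(*batch)->tables[(*batch)->nr++] = table;
		if ((*batch)->nr == MAX_TABLE_BATCH) {
			free_table_by_rcu(*batch);
			*batch = NULL;
		}
	}

	int main(void)
	{
		struct table_batch *batch = NULL;

		for (int i = 0; i < 40; i++)
			remove_table(&batch, malloc(64));
		if (batch)
			free_table_by_rcu(batch);	/* final flush */
		return 0;
	}

The point being: the common case costs one page allocation per batch of
freed tables, and the IPI scheme only survives as the safety net for the
allocation-failure path.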