Message-ID: <CAHk-=wj0HyNR+d+=te8x3CEApCDJFwFfb22DH5TAVyPArNK9Tg@mail.gmail.com>
Date: Sat, 30 Nov 2024 09:54:40 -0800
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Rik van Riel <riel@...riel.com>
Cc: kernel test robot <oliver.sang@...el.com>, oe-lkp@...ts.linux.dev, lkp@...el.com,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
Andy Lutomirski <luto@...nel.org>, Peter Zijlstra <peterz@...radead.org>
Subject: Re: [linus:master] [x86/mm/tlb] 7e33001b8b: will-it-scale.per_thread_ops 20.7% improvement
On Sat, 30 Nov 2024 at 09:31, Rik van Riel <riel@...riel.com> wrote:
>
> 1) Stop using the mm_cpumask altogether on x86
I think you would still want it as a "this is the upper bound" thing -
exactly like your lazy code effectively does now.
It's not giving some precise "these are the CPUs that have TLB
contents", but instead just a "these CPUs *might* have TLB contents".
But that's a *big* win for any single-threaded case, to not have to
walk over potentially hundreds of CPUs when that thing has only ever
actually been on one or two cores.
Because a lot of short-lived processes only ever live on a single CPU.
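To make that concrete, here's a minimal sketch of the set-only "upper
bound" update I mean (the sketch_* helper is a made-up name for
illustration, not the real kernel code):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/*
 * Set-only update at switch-in: the bit is never eagerly cleared, so
 * mm_cpumask() over-approximates "CPUs that might still have TLB
 * entries for this mm".  For a process that only ever ran on one core
 * it stays a one-bit mask, which is the case that matters.
 */
static inline void sketch_note_mm_cpu(struct mm_struct *mm)
{
	unsigned int cpu = smp_processor_id();

	if (!cpumask_test_cpu(cpu, mm_cpumask(mm)))
		cpumask_set_cpu(cpu, mm_cpumask(mm));
}

The test-before-set is just to avoid dirtying a shared cacheline on
every context switch when the bit is already set.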
The benchmarks you are optimizing for - as well as the ones that regress - are
(a) made-up microbenchmark loads
(b) running ridiculously many threads
and I think you should take some of what they say with a big pinch of salt.
Those "20% difference" numbers aren't actually *real*, is what I'm saying.
> 2) Instead, at context switch time just update
> per_cpu variables like cpu_tlbstate.loaded_mm
> and friends
See above. I think you'll still want something that limits the work for
the actual real situation of "look, ma, I'm a single-threaded compiler".
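Roughly like this, as a sketch (made-up names again - the real code
lives around switch_mm_irqs_off() and cpu_tlbstate):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/percpu.h>
#include <linux/smp.h>

/* Made-up stand-in for cpu_tlbstate.loaded_mm */
static DEFINE_PER_CPU(struct mm_struct *, sketch_loaded_mm);

static inline void sketch_switch_mm(struct mm_struct *next)
{
	unsigned int cpu = smp_processor_id();

	/* The cheap per-cpu bookkeeping you want at context switch time. */
	this_cpu_write(sketch_loaded_mm, next);

	/*
	 * ... but still record the upper bound, so the (much rarer)
	 * flush doesn't have to look at every CPU in the machine.
	 */
	if (!cpumask_test_cpu(cpu, mm_cpumask(next)))
		cpumask_set_cpu(cpu, mm_cpumask(next));
}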
> 3) At (much rarer) TLB flush time:
> - Iterate over all CPUs
Change this to "iterate over mm_cpumask", and I think it will work a
whole lot better.
Because yes, clearly with just the *pure* lazy mm_cpumask, you won
some at scheduling time, but you lost a *lot* by just forcing
pointless stale IPIs instead.
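IOW, something like this sketch on the flush side (sketch_* names are
made up, and the callback body is elided):

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <linux/smp.h>

/* Made-up callback: flush the local TLB entries for the mm in 'info'. */
static void sketch_flush_tlb_func(void *info)
{
	/* local TLB flush for the mm goes here */
}

static void sketch_flush_tlb_mm(struct mm_struct *mm)
{
	/*
	 * Only IPI the CPUs that might have stale entries - i.e.
	 * iterate mm_cpumask() - instead of every online CPU.
	 */
	on_each_cpu_mask(mm_cpumask(mm), sketch_flush_tlb_func, mm, true);
}

For the single-threaded case that mask is one or two bits, so you never
end up broadcasting to hundreds of idle CPUs.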
Linus