Message-ID: <23808da41421f3d95b65a2346ea7591631af322d.camel@surriel.com>
Date: Mon, 02 Dec 2024 13:10:20 -0500
From: Rik van Riel <riel@...riel.com>
To: Mathieu Desnoyers <mathieu.desnoyers@...icios.com>, Peter Zijlstra
<peterz@...radead.org>
Cc: kernel test robot <oliver.sang@...el.com>, oe-lkp@...ts.linux.dev,
lkp@...el.com, linux-kernel@...r.kernel.org, x86@...nel.org, Ingo Molnar
<mingo@...nel.org>, Dave Hansen <dave.hansen@...el.com>, Linus Torvalds
<torvalds@...ux-foundation.org>, Mel Gorman <mgorman@...e.de>
Subject: Re: [tip:x86/mm] [x86/mm/tlb] 209954cbc7:
will-it-scale.per_thread_ops 13.2% regression

On Mon, 2024-12-02 at 11:30 -0500, Mathieu Desnoyers wrote:
>
> Or we just build a per-cpu mm_cpumask from per-CPU state
> every time we want to use the mm_cpumask. But AFAIU this
> is going to be a tradeoff between:
>
> - Overhead of context switch at scale
>
> vs
>
> - Overhead of TLB flush
>
>
> So I guess what we end up doing really depends on which scenario
> we consider most frequent.
>
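
For concreteness, building the mask on demand would be
something like the sketch below. The helper name is made
up, and I'm assuming the per-CPU loaded_mm tracking we
already have in x86's cpu_tlbstate:

#include <linux/cpumask.h>
#include <linux/mm_types.h>
#include <asm/tlbflush.h>

/*
 * Illustrative sketch only: walk every CPU's loaded_mm and
 * build the mask on demand, instead of maintaining mm_cpumask
 * at context switch time. Reads of a remote CPU's loaded_mm
 * are inherently racy; a real version would have to reason
 * about CPUs switching mms concurrently with the walk.
 */
static void build_mm_cpumask(struct mm_struct *mm,
			     struct cpumask *mask)
{
	int cpu;

	cpumask_clear(mask);
	for_each_online_cpu(cpu) {
		if (READ_ONCE(per_cpu(cpu_tlbstate.loaded_mm, cpu)) == mm)
			cpumask_set_cpu(cpu, mask);
	}
}

That trades an O(nr_cpus) walk on every flush for zero
bookkeeping at context switch time.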

I think that is going to be more workload dependent
than anything else.

If you're doing a kernel compile, or running a bunch
of shell scripts and simple Unix commands, you are
dealing mostly with single-threaded programs, where
not sending IPIs is the best thing to do.

If you're running a long-lived, heavily multithreaded
program, you will benefit from reducing the context
switch overhead more than anything else.

Both seem like equally valid use cases.

I'm playing around with a patch now that builds on
my previous patches, but only trims the mm_cpumask
once a second.

Hopefully that can give us a reasonable middle ground
between the two.
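
Roughly, the rate limiting looks something like this
(made-up field and helper names, not the actual patch):

#include <linux/jiffies.h>
#include <linux/mm_types.h>

/*
 * Illustrative sketch: let callers walk and trim the
 * mm_cpumask at most once a second per mm. next_trim is
 * a made-up field, not the real mm_context_t layout.
 * Two CPUs racing past the check just trim twice, which
 * is harmless.
 */
static bool should_trim_cpumask(struct mm_struct *mm)
{
	if (time_after(jiffies, READ_ONCE(mm->context.next_trim))) {
		WRITE_ONCE(mm->context.next_trim, jiffies + HZ);
		return true;
	}
	return false;
}

Flushes that arrive less than a second after the last
trim just use the (possibly slightly stale) mask as-is.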
--
All Rights Reversed.