Message-ID: <a7cffa2a68a7c9e40357b3300220c5eb0065e86b.camel@surriel.com>
Date: Fri, 08 Nov 2024 15:41:43 -0500
From: Rik van Riel <riel@...riel.com>
To: Dave Hansen <dave.hansen@...el.com>, Dave Hansen
<dave.hansen@...ux.intel.com>
Cc: Andy Lutomirski <luto@...nel.org>, Peter Zijlstra
<peterz@...radead.org>, Thomas Gleixner <tglx@...utronix.de>, Ingo Molnar
<mingo@...hat.com>, Borislav Petkov <bp@...en8.de>, x86@...nel.org, "H.
Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
kernel-team@...a.com
Subject: Re: [PATCH] x86,tlb: update mm_cpumask lazily
On Fri, 2024-11-08 at 12:31 -0800, Dave Hansen wrote:
>
>
> The only thing I can think of that really worries me is some kind of
> forked worker model where before this patch you would have:
>
...
> Where that IPI wasn't needed at *all* before. But that's totally
> contrived.
>
> So I think this is the kind of thing we'd want to apply to -rc1 and let
> the robots poke at it for a few weeks. But it does seem like a sound
> idea to me.
>
I am definitely hoping the robot will find some workload to throw at
this patch that I didn't think of.
Most of the workloads here are either single-threaded processes or
heavily multi-threaded processes.
For the worker process case, I would expect us to COW and flush a
number of pages in close time proximity to each other. In that case
the first IPI may get sent to a CPU that no longer needs it, but future
IPIs in that batch should not be, since that first flush will have
cleared the stale CPU from the mm_cpumask.
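Roughly, the lazy scheme means the flush IPI handler notices that the
target mm is not loaded on that CPU and clears the CPU's bit in
mm_cpumask, so the rest of the burst skips it. A minimal sketch of
that idea (flush_tlb_func(), cpu_tlbstate.loaded_mm and mm_cpumask()
are real names in the x86 code, but the exact hook point below is only
an illustration, not the patch itself):

static void flush_tlb_func(void *info)
{
        struct flush_tlb_info *f = info;
        struct mm_struct *loaded_mm = this_cpu_read(cpu_tlbstate.loaded_mm);

        if (f->mm && f->mm != loaded_mm) {
                /*
                 * Sketch only: this CPU is no longer running the
                 * target mm, so drop our bit from mm_cpumask.  Later
                 * flushes in the same burst will not IPI this CPU.
                 */
                cpumask_clear_cpu(smp_processor_id(), mm_cpumask(f->mm));
                return;
        }

        /* ... do the requested TLB flush for loaded_mm ... */
}

The win is that the context switch path no longer has to write to a
potentially contended mm_cpumask cache line; the cost is at most one
spurious IPI per CPU per flush burst.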
If we don't send many invalidation IPIs at all, we probably don't
care that much.
If we do send a bunch, they often seem to happen in bursts.
I don't know if there are workloads where we send them frequently,
but not in bursts.
--
All Rights Reversed.