Message-ID: <ZhcnzS1S6zOMJwSL@casper.infradead.org>
Date: Thu, 11 Apr 2024 00:59:09 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Peter Xu <peterx@...hat.com>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Suren Baghdasaryan <surenb@...gle.com>,
Lokesh Gidra <lokeshgidra@...gle.com>,
"Liam R . Howlett" <Liam.Howlett@...cle.com>,
Alistair Popple <apopple@...dia.com>
Subject: Re: [PATCH] mm: Always sanity check anon_vma first for per-vma locks
On Wed, Apr 10, 2024 at 05:23:18PM -0400, Peter Xu wrote:
> On Wed, Apr 10, 2024 at 10:10:45PM +0100, Matthew Wilcox wrote:
> > > I can do some tests later today or tomorrow.  Any suggestions on
> > > how to amplify the effect you're concerned about?
> >
> > 8 socket NUMA system, 800MB text segment, 10,000 threads. No, I'm not
> > joking, that's a real customer workload.
>
> Well, I believe you, but even with this, that's a total of 800MB of
> memory on a giant monster system... probably just to fault in once.
>
> And even before we get into the details: we're talking about a giant
> program running across hundreds of cores with hundreds of MB of text,
> then... wouldn't the program developer already have considered
> mlockall() at the entry of the program?  Wouldn't that already be
> greatly beneficial, with whatever granularity of locks a future
> fault would take?
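
[For reference, a minimal sketch of the mlockall() call suggested
above, using the standard MCL_CURRENT | MCL_FUTURE flags from
<sys/mman.h>; the surrounding program structure and error handling
are illustrative, not taken from any real workload.]

	#include <stdio.h>
	#include <sys/mman.h>

	int main(void)
	{
		/*
		 * Pin all current and future mappings into RAM, so the
		 * large text segment is faulted in and locked up front
		 * rather than faulted page by page under load.
		 */
		if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0) {
			perror("mlockall");
			return 1;
		}

		/* ... spawn the worker threads here ... */
		return 0;
	}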
I don't care what your theory is, or even what your benchmarking shows.
I had basically the inverse of this patch, and my customer's workload
showed significant improvement as a result. Data talks, bullshit walks.
Your patch is NAKed and will remain NAKed.