Message-ID: <ZhcstFcjOuOmr0wx@x1n>
Date: Wed, 10 Apr 2024 20:20:04 -0400
From: Peter Xu <peterx@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Suren Baghdasaryan <surenb@...gle.com>,
	Lokesh Gidra <lokeshgidra@...gle.com>,
	"Liam R . Howlett" <Liam.Howlett@...cle.com>,
	Alistair Popple <apopple@...dia.com>
Subject: Re: [PATCH] mm: Always sanity check anon_vma first for per-vma locks

On Thu, Apr 11, 2024 at 12:59:09AM +0100, Matthew Wilcox wrote:
> On Wed, Apr 10, 2024 at 05:23:18PM -0400, Peter Xu wrote:
> > On Wed, Apr 10, 2024 at 10:10:45PM +0100, Matthew Wilcox wrote:
> > > > I can do some tests later today or tomorrow.  Do you have any suggestion
> > > > on how to amplify the effect you are concerned about?
> > > 
> > > 8 socket NUMA system, 800MB text segment, 10,000 threads.  No, I'm not
> > > joking, that's a real customer workload.
> > 
> > Well, I believe you, but even with this, that's a total of 800MB of memory
> > on a giant monster system... probably only faulted in once.
> > 
> > And even before we get into the details... we're talking about such a
> > giant program running across hundreds of cores with hundreds of MB of
> > text; hasn't the program developer already considered mlockall() at the
> > entry of the program?  Wouldn't that already be greatly beneficial with
> > whatever granularity of locks a future fault would take?
> 
> I don't care what your theory is, or even what your benchmarking shows.
> I had basically the inverse of this patch, and my customer's workload
> showed significant improvement as a result.  Data talks, bullshit walks.
> Your patch is NAKed and will remain NAKed.

Either tell me your workload, and I may try it.

Or please explain why it helps.  If such a huge library is in a single VMA,
I don't see why the per-VMA lock is better than the mmap lock.  If the text
is split across multiple VMAs, it should only help when the cores fault on
different VMAs, not on the same one.

Could you please go either way?
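
For reference, the mlockall() idea I mentioned above is just this at program
entry.  A minimal sketch, assuming it is acceptable to lock (and, with
MCL_FUTURE, pre-populate) everything the process maps:

	/* Lock all current and future mappings so the text segment never
	 * takes a major fault after startup.  MCL_ONFAULT is an alternative
	 * when pre-populating everything up front is too aggressive.
	 */
	#include <sys/mman.h>
	#include <stdio.h>

	int main(void)
	{
		if (mlockall(MCL_CURRENT | MCL_FUTURE) != 0)
			perror("mlockall");
		/* ... rest of the program runs with resident text ... */
		return 0;
	}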

For now my patch got strongly NAKed without any real proof yet.  If that
workload really matters, I am happy to learn, and I agree this patch
shouldn't go in if such proof is provided.  Otherwise I am not convinced.
If you think data talks, I'm happy to try any workload that I have access
to, and then we can compare data.

-- 
Peter Xu

