Message-ID: <7k2gs6xmx2q7la6kle5xpn2p2f6bccbiv5lrdowp5hnecxpijx@rzwxdhcl6mc2>
Date: Fri, 31 Jan 2025 12:47:24 -0500
From: "Liam R. Howlett" <Liam.Howlett@...cle.com>
To: Davidlohr Bueso <dave@...olabs.net>
Cc: Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
        SeongJae Park <sj@...nel.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        David Hildenbrand <david@...hat.com>,
        Shakeel Butt <shakeel.butt@...ux.dev>,
        Vlastimil Babka <vbabka@...e.cz>, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org
Subject: Re: [RFC PATCH v2 4/4] mm/madvise: remove redundant mmap_lock
 operations from process_madvise()

* Davidlohr Bueso <dave@...olabs.net> [250131 12:31]:
> On Fri, 31 Jan 2025, Lorenzo Stoakes wrote:
> 
> > On Thu, Jan 16, 2025 at 05:30:58PM -0800, SeongJae Park wrote:
> > > Optimize away redundant mmap lock operations in process_madvise() by
> > > taking the mmap lock once up front, and then doing the remaining
> > > per-range work for all ranges in the loop.
> > > 
> > > Signed-off-by: SeongJae Park <sj@...nel.org>
> > 
> > I wonder if this might increase lock contention, because now all of the
> > vector operations will hold the relevant mm lock without releasing it
> > after each operation?
> 
> That was exactly my concern. While afaict the numbers presented in v1
> are quite nice, this is ultimately a micro-benchmark, where no other
> unrelated threads are impacted by these new hold times.

Indeed, I was also concerned about this scenario.

But this method does have the added advantage of keeping the vma space
in the same state it was in when the call was initially made - although
the race between looking at the data and acting on it still exists,
batching the lock at least removes intermediate changes between ranges.
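
To make the trade-off concrete, here is a minimal user-space model of the
two locking patterns (just an illustration, not the kernel code: a pthread
rwlock stands in for mmap_lock, and fake_mmap_lock / apply_one_range are
invented names):

#include <pthread.h>
#include <stddef.h>

/* Invented stand-ins: the rwlock models mmap_lock and apply_one_range()
 * models the per-range madvise work. */
static pthread_rwlock_t fake_mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
static void apply_one_range(size_t idx) { (void)idx; }

/* Old pattern: take and drop the lock for every iovec entry. */
static void per_range_locking(size_t nr_ranges)
{
	for (size_t i = 0; i < nr_ranges; i++) {
		pthread_rwlock_rdlock(&fake_mmap_lock);
		apply_one_range(i);
		pthread_rwlock_unlock(&fake_mmap_lock);
	}
}

/* New pattern: one lock/unlock around the whole loop, so a writer
 * (think concurrent mmap/munmap) waits for the entire batch. */
static void batched_locking(size_t nr_ranges)
{
	pthread_rwlock_rdlock(&fake_mmap_lock);
	for (size_t i = 0; i < nr_ranges; i++)
		apply_one_range(i);
	pthread_rwlock_unlock(&fake_mmap_lock);
}

int main(void)
{
	per_range_locking(16);
	batched_locking(16);
	return 0;
}

The batched form is what keeps the view of the address space stable
across ranges; the cost is the longer writer wait discussed above.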

> 
> > Probably it's ok given the limited size of the iov, but maybe in future
> > we'd want to set a limit on the number of ranges we process before we
> > drop/reacquire the lock?
> 
> imo, this should best be done in the same patch/series. Maybe extend
> the benchmark to use IOV_MAX and find a sweet spot?

Are you worried this is over-engineering for a problem that may never be
an issue, or is there a particular use case you have in mind?

It is probably worth investigating, and maybe a concrete use case would
help pin down the sweet spot to target?
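
If such a cap turns out to be worthwhile, the shape would presumably be
something like the following (again just a user-space model under the same
stand-ins as above; RANGES_PER_LOCK is an invented constant, not anything
from this series):

#include <pthread.h>
#include <stddef.h>

#define RANGES_PER_LOCK 8	/* invented batch size - the "sweet spot" to tune */

static pthread_rwlock_t fake_mmap_lock = PTHREAD_RWLOCK_INITIALIZER;
static void apply_one_range(size_t idx) { (void)idx; }

/* Drop and reacquire the lock every RANGES_PER_LOCK entries so that a
 * writer blocked behind a large iovec can make progress in between. */
static void capped_batched_locking(size_t nr_ranges)
{
	size_t i = 0;

	while (i < nr_ranges) {
		size_t batch_end = i + RANGES_PER_LOCK;

		if (batch_end > nr_ranges)
			batch_end = nr_ranges;

		pthread_rwlock_rdlock(&fake_mmap_lock);
		for (; i < batch_end; i++)
			apply_one_range(i);
		pthread_rwlock_unlock(&fake_mmap_lock);
	}
}

int main(void)
{
	capped_batched_locking(1024);	/* e.g. an IOV_MAX-sized vector */
	return 0;
}

A benchmark sweep over RANGES_PER_LOCK with an IOV_MAX-sized vector is
roughly what the "find a sweet spot" suggestion above amounts to.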

Thanks,
Liam

