Message-ID: <CAJuCfpG8Lq9xOce4yaNm1XzdAxVWTJYA85zjDbcpJ5MxxHr+4g@mail.gmail.com>
Date: Tue, 14 Feb 2023 08:47:26 -0800
From: Suren Baghdasaryan <surenb@...gle.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: Andrew Morton <akpm@...ux-foundation.org>, michel@...pinasse.org,
jglisse@...gle.com, mhocko@...e.com, vbabka@...e.cz,
hannes@...xchg.org, mgorman@...hsingularity.net, dave@...olabs.net,
liam.howlett@...cle.com, peterz@...radead.org,
ldufour@...ux.ibm.com, paulmck@...nel.org, mingo@...hat.com,
will@...nel.org, luto@...nel.org, songliubraving@...com,
peterx@...hat.com, david@...hat.com, dhowells@...hat.com,
hughd@...gle.com, bigeasy@...utronix.de, kent.overstreet@...ux.dev,
punit.agrawal@...edance.com, lstoakes@...il.com,
peterjung1337@...il.com, rientjes@...gle.com,
axelrasmussen@...gle.com, joelaf@...gle.com, minchan@...gle.com,
rppt@...nel.org, jannh@...gle.com, shakeelb@...gle.com,
tatashin@...gle.com, edumazet@...gle.com, gthelen@...gle.com,
gurua@...gle.com, arjunroy@...gle.com, soheil@...gle.com,
leewalsh@...gle.com, posk@...gle.com, linux-mm@...ck.org,
linux-arm-kernel@...ts.infradead.org,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH v2 00/33] Per-VMA locks
On Fri, Jan 27, 2023 at 4:00 PM Suren Baghdasaryan <surenb@...gle.com> wrote:
>
> On Fri, Jan 27, 2023 at 3:26 PM Matthew Wilcox <willy@...radead.org> wrote:
> >
> > On Fri, Jan 27, 2023 at 02:51:38PM -0800, Andrew Morton wrote:
> > > On Fri, 27 Jan 2023 11:40:37 -0800 Suren Baghdasaryan <surenb@...gle.com> wrote:
> > >
> > > > The per-VMA locks idea was discussed during the SPF [1] discussion at LSF/MM
> > > > last year [2], which concluded with the suggestion that “a reader/writer
> > > > semaphore could be put into the VMA itself; that would have the effect of
> > > > using the VMA as a sort of range lock. There would still be contention at
> > > > the VMA level, but it would be an improvement.” This patchset implements
> > > > the suggested approach.
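
(For anyone new to the thread: the suggested approach boils down to
something like the sketch below. This is illustrative only -- the
struct and helper names are made up and are not the patchset's actual
API. The idea is that readers, i.e. page faults, take the VMA's
semaphore for read without holding mmap_lock, while writers modifying
the VMA take it exclusively, so the semaphore behaves like a range
lock over [vm_start, vm_end).)

#include <linux/rwsem.h>

/*
 * Hypothetical illustration of "a reader/writer semaphore in the VMA".
 * Imagine vm_area_struct gaining a lock covering its address range.
 */
struct vma_demo {
	unsigned long vm_start, vm_end;
	struct rw_semaphore lock;	/* per-VMA "range" lock */
};

/* Fault path: try the per-VMA lock; fall back to mmap_lock if contended. */
static bool fault_try_per_vma_lock(struct vma_demo *vma)
{
	if (!down_read_trylock(&vma->lock))
		return false;		/* caller falls back to mmap_lock */
	/* ... handle the fault without mmap_lock held ... */
	up_read(&vma->lock);
	return true;
}

/* VMA writers (mmap/munmap/mprotect paths) exclude all such readers. */
static void vma_demo_write_lock(struct vma_demo *vma)
{
	down_write(&vma->lock);
}
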
> > >
> > > I think I'll await reviewer/tester input for a while.
Over the last two weeks I did not receive any feedback on the mailing
list, but off-list a couple of people reported positive results in
their tests, and Punit reported a regression on his NUMA machine when
running the pft-threads workload. I found the source of that
regression and have two small fixes which were confirmed to improve
performance (hopefully Punit will share the results here).
I'm planning to post v3 sometime this week. If anyone has additional
feedback, please let me know soon so that I can address it in v3.
Thanks,
Suren.
>
> Sure, I don't expect the review to be very quick considering the
> complexity; however, I would appreciate any testing that can be done.
>
> > >
> > > > The patchset implements per-VMA locking only for anonymous pages which
> > > > are not in swap, and it avoids userfaultfd, as its fault handling is
> > > > more complex. Additional support for file-backed page faults, swapped-out
> > > > pages and userfaultfd can be added incrementally.
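
(To make that scope concrete, the fast-path eligibility test would
look roughly like the sketch below. The helper name is made up;
vma_is_anonymous() and userfaultfd_armed() are existing kernel
predicates. The "not in swap" part is a per-page property, so a fault
that hits a swap entry would still fall back to mmap_lock at fault
time.)

#include <linux/mm.h>
#include <linux/userfaultfd_k.h>

/* Hypothetical helper: can this fault try the per-VMA lock path? */
static bool fault_can_use_per_vma_lock(struct vm_area_struct *vma)
{
	/* Only anonymous VMAs are handled in this first step... */
	if (!vma_is_anonymous(vma))
		return false;
	/*
	 * ...and userfaultfd-armed VMAs are skipped for now, since
	 * their fault handling is more complex.
	 */
	if (userfaultfd_armed(vma))
		return false;
	return true;
}
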
> > >
> > > This is a significant risk. How can we be confident that these as yet
> > > unimplemented parts are implementable and that the result will be good?
> >
> > They don't need to be implementable for this patchset to be evaluated
> > on its own terms. This patchset improves scalability for anon pages
> > without making file/swap/uffd pages worse (or if it does, I haven't
> > seen the benchmarks to prove it).
>
> Making it work for all kinds of page faults would require much more
> time, so this incremental approach, in which we tackle the mmap_lock
> scalability problem part by part, seems more doable. Even with
> anonymous-only support, the patchset shows considerable improvements.
> Therefore I would argue that the patchset is viable even if it does
> not support the above-mentioned cases.
>
> >
> > That said, I'm confident that I have a good handle on how to make
> > file-backed page faults work under RCU.
>
> Looking forward to collaborating on that!
> Thanks,
> Suren.