Message-ID: <Zo1hBFS7c_J-Yx-7@casper.infradead.org>
Date: Tue, 9 Jul 2024 17:10:44 +0100
From: Matthew Wilcox <willy@...radead.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
Andrii Nakryiko <andrii.nakryiko@...il.com>,
Masami Hiramatsu <mhiramat@...nel.org>, mingo@...nel.org,
andrii@...nel.org, linux-kernel@...r.kernel.org,
rostedt@...dmis.org, oleg@...hat.com, jolsa@...nel.org,
clm@...a.com, bpf <bpf@...r.kernel.org>
Subject: Re: [PATCH 00/10] perf/uprobe: Optimize uprobes
On Tue, Jul 09, 2024 at 04:29:43PM +0200, Peter Zijlstra wrote:
> On Tue, Jul 09, 2024 at 07:11:23AM -0700, Paul E. McKenney wrote:
> > On Tue, Jul 09, 2024 at 11:01:53AM +0200, Peter Zijlstra wrote:
> > > On Mon, Jul 08, 2024 at 05:25:14PM -0700, Andrii Nakryiko wrote:
> > >
> > > > Quick profiling for the 8-threaded benchmark shows that we spend >20%
> > > > in mmap_read_lock/mmap_read_unlock in find_active_uprobe. I think
> > > > that's what would prevent uprobes from scaling linearly. If you have
> > > > some good ideas on how to get rid of that, I think it would be
> > > > extremely beneficial.
> > >
> > > That's find_vma() and friends. I started RCU-ifying that a *long* time
> > > ago when I started the speculative page fault patches. I sorta lost
> > > track of that effort, Willy where are we with that?
Probably best to start with lock_vma_under_rcu() in mm/memory.c.
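Untested sketch of what that could look like for the uprobe lookup (the
function name is made up; it reuses the existing find_uprobe() and
vaddr_to_offset() helpers in kernel/events/uprobes.c, and the caller
would fall back to the mmap_read_lock() path when the speculative
lookup returns NULL):

	static struct uprobe *find_active_uprobe_rcu(unsigned long bp_vaddr)
	{
		struct mm_struct *mm = current->mm;
		struct vm_area_struct *vma;
		struct uprobe *uprobe = NULL;

		/* Per-VMA read lock, no mmap_lock; NULL means "retry under mmap_lock". */
		vma = lock_vma_under_rcu(mm, bp_vaddr);
		if (!vma)
			return NULL;

		if (vma->vm_file) {
			struct inode *inode = file_inode(vma->vm_file);
			loff_t offset = vaddr_to_offset(vma, bp_vaddr);

			/* find_uprobe() takes its own reference on the uprobe. */
			uprobe = find_uprobe(inode, offset);
		}

		vma_end_read(vma);
		return uprobe;
	}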
> > > Specifically, how feasible would it be to get a simple RCU based
> > > find_vma() version sorted these days?
> >
> > Liam's and Willy's Maple Tree work, combined with Suren's per-VMA locking
> > combined with some of Vlastimil's slab work is pushing in that direction.
> > I believe that things are getting pretty close.
>
> So I fundamentally do not believe in per-VMA locking. Specifically for
> this case that would be trading one hot line for another. I tried
> telling people that, but it doesn't seem to stick :/
SRCU also had its own performance problems, so we've got problems one
way or the other. The per-VMA lock probably doesn't work quite the way
you think it does, but it absolutely can be a hot cacheline.
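To be concrete about the cacheline (simplified from the 6.4
vma_start_read(), not verbatim kernel code): every reader does an
atomic RMW on the per-VMA lock word, so lots of CPUs hitting uprobes
in the same VMA will bounce that line.

	static inline bool vma_start_read(struct vm_area_struct *vma)
	{
		/* Writer (mmap_lock holder) already marked this VMA locked? */
		if (READ_ONCE(vma->vm_lock_seq) == READ_ONCE(vma->vm_mm->mm_lock_seq))
			return false;

		/* Every reader does an atomic RMW on the per-VMA lock ... */
		if (!down_read_trylock(&vma->vm_lock->lock))
			return false;

		/* ... and re-checks the sequence after acquiring it. */
		if (vma->vm_lock_seq == vma->vm_mm->mm_lock_seq) {
			up_read(&vma->vm_lock->lock);
			return false;
		}
		return true;
	}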
I did propose a store-free variant at LSFMM 2022 and again at 2023,
but was voted down. https://lwn.net/Articles/932298/
I don't think the door is completely closed to a migration to that,
but it's a harder sell than what we've got. Of course, data helps ...
> Per VMA refcounts or per VMA locks are a complete fail IMO.
>
> I suppose I should go dig out the latest versions of those patches to
> see where they're at :/
Merged in v6.4 ;-P