Date: Wed, 27 Mar 2024 17:06:01 +0000
From: Jonathan Haslam <jonathan.haslam@...il.com>
To: Masami Hiramatsu <mhiramat@...nel.org>
Cc: Andrii Nakryiko <andrii.nakryiko@...il.com>, 
	linux-trace-kernel@...r.kernel.org, andrii@...nel.org, bpf@...r.kernel.org, rostedt@...dmis.org, 
	Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, 
	Arnaldo Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, 
	Mark Rutland <mark.rutland@....com>, Alexander Shishkin <alexander.shishkin@...ux.intel.com>, 
	Jiri Olsa <jolsa@...nel.org>, Ian Rogers <irogers@...gle.com>, 
	Adrian Hunter <adrian.hunter@...el.com>, linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] uprobes: reduce contention on uprobes_tree access

> > Masami,
> > 
> > Given the discussion around per-cpu rw semaphore and need for
> > (internal) batched attachment API for uprobes, do you think you can
> > apply this patch as is for now? We can then gain initial improvements
> > in scalability that are also easy to backport, and Jonathan will work
> > on a more complete solution based on per-cpu RW semaphore, as
> > suggested by Ingo.
> 
> Yeah, it is interesting to use per-cpu rw semaphore on uprobe.
> I would like to wait for the next version.

My initial tests show a nice improvement over the RW spinlocks but a
significant regression in acquiring the write lock. I've got a few days'
vacation over Easter but I'll aim to get some more formalised results out
to the thread toward the end of next week.
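
For reference, the variant I'm testing has roughly the following shape
(a sketch only, applied to the same lock sites as this patch; note that
a per-cpu rw semaphore is a sleeping lock, so this assumes all of these
paths can sleep):

static DEFINE_STATIC_PERCPU_RWSEM(uprobes_treelock);	/* serialize rbtree access */

static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
{
	struct uprobe *uprobe;

	/* read side: per-CPU counter, no shared cache line bounced */
	percpu_down_read(&uprobes_treelock);
	uprobe = __find_uprobe(inode, offset);
	percpu_up_read(&uprobes_treelock);

	return uprobe;
}

static struct uprobe *insert_uprobe(struct uprobe *uprobe)
{
	struct uprobe *u;

	/* write side: waits for all CPUs' readers to drain - the slow path */
	percpu_down_write(&uprobes_treelock);
	u = __insert_uprobe(uprobe);
	percpu_up_write(&uprobes_treelock);

	return u;
}

The write-side regression I'm seeing is consistent with
percpu_down_write() having to wait for all readers to drain, which is
exactly the trade-off this lock makes.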

Jon.

> 
> Thank you,
> 
> > 
> > >
> > > BTW, how did you measure the overhead? I think spinlock overhead
> > > will depend on how much lock contention happens.
> > >
> > > Thank you,
> > >
> > > >
> > > > [0] https://docs.kernel.org/locking/spinlocks.html
> > > >
> > > > Signed-off-by: Jonathan Haslam <jonathan.haslam@...il.com>
> > > > ---
> > > >  kernel/events/uprobes.c | 22 +++++++++++-----------
> > > >  1 file changed, 11 insertions(+), 11 deletions(-)
> > > >
> > > > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > > > index 929e98c62965..42bf9b6e8bc0 100644
> > > > --- a/kernel/events/uprobes.c
> > > > +++ b/kernel/events/uprobes.c
> > > > @@ -39,7 +39,7 @@ static struct rb_root uprobes_tree = RB_ROOT;
> > > >   */
> > > >  #define no_uprobe_events()   RB_EMPTY_ROOT(&uprobes_tree)
> > > >
> > > > -static DEFINE_SPINLOCK(uprobes_treelock);    /* serialize rbtree access */
> > > > +static DEFINE_RWLOCK(uprobes_treelock);      /* serialize rbtree access */
> > > >
> > > >  #define UPROBES_HASH_SZ      13
> > > >  /* serialize uprobe->pending_list */
> > > > @@ -669,9 +669,9 @@ static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
> > > >  {
> > > >       struct uprobe *uprobe;
> > > >
> > > > -     spin_lock(&uprobes_treelock);
> > > > +     read_lock(&uprobes_treelock);
> > > >       uprobe = __find_uprobe(inode, offset);
> > > > -     spin_unlock(&uprobes_treelock);
> > > > +     read_unlock(&uprobes_treelock);
> > > >
> > > >       return uprobe;
> > > >  }
> > > > @@ -701,9 +701,9 @@ static struct uprobe *insert_uprobe(struct uprobe *uprobe)
> > > >  {
> > > >       struct uprobe *u;
> > > >
> > > > -     spin_lock(&uprobes_treelock);
> > > > +     write_lock(&uprobes_treelock);
> > > >       u = __insert_uprobe(uprobe);
> > > > -     spin_unlock(&uprobes_treelock);
> > > > +     write_unlock(&uprobes_treelock);
> > > >
> > > >       return u;
> > > >  }
> > > > @@ -935,9 +935,9 @@ static void delete_uprobe(struct uprobe *uprobe)
> > > >       if (WARN_ON(!uprobe_is_active(uprobe)))
> > > >               return;
> > > >
> > > > -     spin_lock(&uprobes_treelock);
> > > > +     write_lock(&uprobes_treelock);
> > > >       rb_erase(&uprobe->rb_node, &uprobes_tree);
> > > > -     spin_unlock(&uprobes_treelock);
> > > > +     write_unlock(&uprobes_treelock);
> > > >       RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
> > > >       put_uprobe(uprobe);
> > > >  }
> > > > @@ -1298,7 +1298,7 @@ static void build_probe_list(struct inode *inode,
> > > >       min = vaddr_to_offset(vma, start);
> > > >       max = min + (end - start) - 1;
> > > >
> > > > -     spin_lock(&uprobes_treelock);
> > > > +     read_lock(&uprobes_treelock);
> > > >       n = find_node_in_range(inode, min, max);
> > > >       if (n) {
> > > >               for (t = n; t; t = rb_prev(t)) {
> > > > @@ -1316,7 +1316,7 @@ static void build_probe_list(struct inode *inode,
> > > >                       get_uprobe(u);
> > > >               }
> > > >       }
> > > > -     spin_unlock(&uprobes_treelock);
> > > > +     read_unlock(&uprobes_treelock);
> > > >  }
> > > >
> > > >  /* @vma contains reference counter, not the probed instruction. */
> > > > @@ -1407,9 +1407,9 @@ vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long e
> > > >       min = vaddr_to_offset(vma, start);
> > > >       max = min + (end - start) - 1;
> > > >
> > > > -     spin_lock(&uprobes_treelock);
> > > > +     read_lock(&uprobes_treelock);
> > > >       n = find_node_in_range(inode, min, max);
> > > > -     spin_unlock(&uprobes_treelock);
> > > > +     read_unlock(&uprobes_treelock);
> > > >
> > > >       return !!n;
> > > >  }
> > > > --
> > > > 2.43.0
> > > >
> > >
> > >
> > > --
> > > Masami Hiramatsu (Google) <mhiramat@...nel.org>
> 
> 
> -- 
> Masami Hiramatsu (Google) <mhiramat@...nel.org>
