Date: Wed, 27 Mar 2024 08:42:58 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Jonathan Haslam <jonathan.haslam@...il.com>
Cc: linux-trace-kernel@...r.kernel.org, andrii@...nel.org,
 bpf@...r.kernel.org, rostedt@...dmis.org, Peter Zijlstra
 <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, Arnaldo Carvalho de
 Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>, Mark Rutland
 <mark.rutland@....com>, Alexander Shishkin
 <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, Ian
 Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
 linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] uprobes: reduce contention on uprobes_tree access

On Mon, 25 Mar 2024 19:04:59 +0000
Jonathan Haslam <jonathan.haslam@...il.com> wrote:

> Hi Masami,
> 
> > > This change has been tested against production workloads that exhibit
> > > significant contention on the spinlock, and a reduction of almost an order
> > > of magnitude in mean uprobe execution time is observed (28 -> 3.5 microsecs).
> > 
> > Looks good to me.
> > 
> > Acked-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>
> > 
> > BTW, how did you measure the overhead? I think spinlock overhead
> > will depend on how much lock contention happens.
> 
> Absolutely. I have the original production workload to test this with and
> a derived one that mimics this test case. The production case has ~24
> threads running on a 192 core system which access 14 USDTs around 1.5
> million times per second in total (across all USDTs). My test case is
> similar but can drive a higher rate of USDT access across more threads and
> therefore generate higher contention.

Thanks for the info. So this result was measured on a sufficiently large
machine with high parallelism, where lock contention really matters.
Can you also include this information along with those numbers in the next version?
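
(For reference, a minimal bpftrace sketch of this kind of latency measurement
might look like the following. This is only an illustration: it assumes
handle_swbp(), the uprobe breakpoint handler in kernel/events/uprobes.c, as
the probe point, and is not the actual production script referenced in this
thread.)

  // Hypothetical sketch: histogram of time spent handling each uprobe hit.
  kprobe:handle_swbp
  {
          @start[tid] = nsecs;
  }

  kretprobe:handle_swbp
  /@start[tid]/
  {
          // Microseconds per uprobe hit, aggregated as a histogram.
          @usecs = hist((nsecs - @start[tid]) / 1000);
          delete(@start[tid]);
  }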

Thank you,

> 
> All measurements are done using bpftrace scripts around relevant parts of
> code in uprobes.c and application code.
> 
> Jon.
> 
> > 
> > Thank you,
> > 
> > > 
> > > [0] https://docs.kernel.org/locking/spinlocks.html
> > > 
> > > Signed-off-by: Jonathan Haslam <jonathan.haslam@...il.com>
> > > ---
> > >  kernel/events/uprobes.c | 22 +++++++++++-----------
> > >  1 file changed, 11 insertions(+), 11 deletions(-)
> > > 
> > > diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> > > index 929e98c62965..42bf9b6e8bc0 100644
> > > --- a/kernel/events/uprobes.c
> > > +++ b/kernel/events/uprobes.c
> > > @@ -39,7 +39,7 @@ static struct rb_root uprobes_tree = RB_ROOT;
> > >   */
> > >  #define no_uprobe_events()	RB_EMPTY_ROOT(&uprobes_tree)
> > >  
> > > -static DEFINE_SPINLOCK(uprobes_treelock);	/* serialize rbtree access */
> > > +static DEFINE_RWLOCK(uprobes_treelock);	/* serialize rbtree access */
> > >  
> > >  #define UPROBES_HASH_SZ	13
> > >  /* serialize uprobe->pending_list */
> > > @@ -669,9 +669,9 @@ static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
> > >  {
> > >  	struct uprobe *uprobe;
> > >  
> > > -	spin_lock(&uprobes_treelock);
> > > +	read_lock(&uprobes_treelock);
> > >  	uprobe = __find_uprobe(inode, offset);
> > > -	spin_unlock(&uprobes_treelock);
> > > +	read_unlock(&uprobes_treelock);
> > >  
> > >  	return uprobe;
> > >  }
> > > @@ -701,9 +701,9 @@ static struct uprobe *insert_uprobe(struct uprobe *uprobe)
> > >  {
> > >  	struct uprobe *u;
> > >  
> > > -	spin_lock(&uprobes_treelock);
> > > +	write_lock(&uprobes_treelock);
> > >  	u = __insert_uprobe(uprobe);
> > > -	spin_unlock(&uprobes_treelock);
> > > +	write_unlock(&uprobes_treelock);
> > >  
> > >  	return u;
> > >  }
> > > @@ -935,9 +935,9 @@ static void delete_uprobe(struct uprobe *uprobe)
> > >  	if (WARN_ON(!uprobe_is_active(uprobe)))
> > >  		return;
> > >  
> > > -	spin_lock(&uprobes_treelock);
> > > +	write_lock(&uprobes_treelock);
> > >  	rb_erase(&uprobe->rb_node, &uprobes_tree);
> > > -	spin_unlock(&uprobes_treelock);
> > > +	write_unlock(&uprobes_treelock);
> > >  	RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
> > >  	put_uprobe(uprobe);
> > >  }
> > > @@ -1298,7 +1298,7 @@ static void build_probe_list(struct inode *inode,
> > >  	min = vaddr_to_offset(vma, start);
> > >  	max = min + (end - start) - 1;
> > >  
> > > -	spin_lock(&uprobes_treelock);
> > > +	read_lock(&uprobes_treelock);
> > >  	n = find_node_in_range(inode, min, max);
> > >  	if (n) {
> > >  		for (t = n; t; t = rb_prev(t)) {
> > > @@ -1316,7 +1316,7 @@ static void build_probe_list(struct inode *inode,
> > >  			get_uprobe(u);
> > >  		}
> > >  	}
> > > -	spin_unlock(&uprobes_treelock);
> > > +	read_unlock(&uprobes_treelock);
> > >  }
> > >  
> > >  /* @vma contains reference counter, not the probed instruction. */
> > > @@ -1407,9 +1407,9 @@ vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long e
> > >  	min = vaddr_to_offset(vma, start);
> > >  	max = min + (end - start) - 1;
> > >  
> > > -	spin_lock(&uprobes_treelock);
> > > +	read_lock(&uprobes_treelock);
> > >  	n = find_node_in_range(inode, min, max);
> > > -	spin_unlock(&uprobes_treelock);
> > > +	read_unlock(&uprobes_treelock);
> > >  
> > >  	return !!n;
> > >  }
> > > -- 
> > > 2.43.0
> > > 
> > 
> > 
> > -- 
> > Masami Hiramatsu (Google) <mhiramat@...nel.org>


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
