Message-Id: <20240325120323.ec3248d330b2755e73a6571e@kernel.org>
Date: Mon, 25 Mar 2024 12:03:23 +0900
From: Masami Hiramatsu (Google) <mhiramat@...nel.org>
To: Jonathan Haslam <jonathan.haslam@...il.com>
Cc: linux-trace-kernel@...r.kernel.org, andrii@...nel.org,
 bpf@...r.kernel.org, rostedt@...dmis.org, mhiramat@...nel.org, Peter
 Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...hat.com>, Arnaldo
 Carvalho de Melo <acme@...nel.org>, Namhyung Kim <namhyung@...nel.org>,
 Mark Rutland <mark.rutland@....com>, Alexander Shishkin
 <alexander.shishkin@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>, Ian
 Rogers <irogers@...gle.com>, Adrian Hunter <adrian.hunter@...el.com>,
 linux-perf-users@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] uprobes: reduce contention on uprobes_tree access

On Thu, 21 Mar 2024 07:57:35 -0700
Jonathan Haslam <jonathan.haslam@...il.com> wrote:

> Active uprobes are stored in an RB tree and accesses to this tree are
> dominated by read operations. Currently, these accesses are serialized by
> a spinlock, but this leads to enormous contention when large numbers of
> threads are executing active probes.
> 
> This patch converts the spinlock used to serialize access to the
> uprobes_tree RB tree into a reader-writer spinlock. This lock type
> aligns naturally with the overwhelmingly read-only nature of the tree
> usage here. Although the addition of reader-writer spinlocks is
> discouraged [0], this fix is proposed as an interim solution while an
> RCU-based approach is implemented (that work is in a nascent form). This
> fix also has the benefit of being trivial, self-contained, and therefore
> simple to backport.
> 
> This change has been tested against production workloads that exhibit
> significant contention on the spinlock; a reduction in mean uprobe
> execution time of almost an order of magnitude is observed (28 -> 3.5
> microseconds).

Looks good to me.

Acked-by: Masami Hiramatsu (Google) <mhiramat@...nel.org>

BTW, how did you measure the overhead? I would expect the spinlock
overhead to depend on how much lock contention happens.
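
For example, I imagine a userspace harness along the lines of the sketch
below, run with and without the uprobe attached (and from many threads to
generate contention). This is only an illustration of mine; probe_target
and the iteration count are arbitrary, not taken from the patch:

#include <stdio.h>
#include <stdint.h>
#include <time.h>

#define N 1000000

/*
 * Probe target: noinline so the uprobe has a stable address to attach
 * to, e.g. with "perf probe -x ./bench probe_target".
 */
__attribute__((noinline)) void probe_target(void)
{
	asm volatile("" ::: "memory");	/* keep the call from being elided */
}

static uint64_t now_ns(void)
{
	struct timespec ts;

	clock_gettime(CLOCK_MONOTONIC, &ts);
	return (uint64_t)ts.tv_sec * 1000000000ULL + ts.tv_nsec;
}

int main(void)
{
	uint64_t start = now_ns();
	int i;

	for (i = 0; i < N; i++)
		probe_target();
	printf("mean: %.1f ns/call\n", (double)(now_ns() - start) / N);
	return 0;
}

The difference in ns/call between the probed and unprobed runs would
approximate the per-hit uprobe cost, including the uprobes_treelock
acquisition in the breakpoint-handling path.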

Thank you,

> 
> [0] https://docs.kernel.org/locking/spinlocks.html
> 
> Signed-off-by: Jonathan Haslam <jonathan.haslam@...il.com>
> ---
>  kernel/events/uprobes.c | 22 +++++++++++-----------
>  1 file changed, 11 insertions(+), 11 deletions(-)
> 
> diff --git a/kernel/events/uprobes.c b/kernel/events/uprobes.c
> index 929e98c62965..42bf9b6e8bc0 100644
> --- a/kernel/events/uprobes.c
> +++ b/kernel/events/uprobes.c
> @@ -39,7 +39,7 @@ static struct rb_root uprobes_tree = RB_ROOT;
>   */
>  #define no_uprobe_events()	RB_EMPTY_ROOT(&uprobes_tree)
>  
> -static DEFINE_SPINLOCK(uprobes_treelock);	/* serialize rbtree access */
> +static DEFINE_RWLOCK(uprobes_treelock);	/* serialize rbtree access */
>  
>  #define UPROBES_HASH_SZ	13
>  /* serialize uprobe->pending_list */
> @@ -669,9 +669,9 @@ static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
>  {
>  	struct uprobe *uprobe;
>  
> -	spin_lock(&uprobes_treelock);
> +	read_lock(&uprobes_treelock);
>  	uprobe = __find_uprobe(inode, offset);
> -	spin_unlock(&uprobes_treelock);
> +	read_unlock(&uprobes_treelock);
>  
>  	return uprobe;
>  }
> @@ -701,9 +701,9 @@ static struct uprobe *insert_uprobe(struct uprobe *uprobe)
>  {
>  	struct uprobe *u;
>  
> -	spin_lock(&uprobes_treelock);
> +	write_lock(&uprobes_treelock);
>  	u = __insert_uprobe(uprobe);
> -	spin_unlock(&uprobes_treelock);
> +	write_unlock(&uprobes_treelock);
>  
>  	return u;
>  }
> @@ -935,9 +935,9 @@ static void delete_uprobe(struct uprobe *uprobe)
>  	if (WARN_ON(!uprobe_is_active(uprobe)))
>  		return;
>  
> -	spin_lock(&uprobes_treelock);
> +	write_lock(&uprobes_treelock);
>  	rb_erase(&uprobe->rb_node, &uprobes_tree);
> -	spin_unlock(&uprobes_treelock);
> +	write_unlock(&uprobes_treelock);
>  	RB_CLEAR_NODE(&uprobe->rb_node); /* for uprobe_is_active() */
>  	put_uprobe(uprobe);
>  }
> @@ -1298,7 +1298,7 @@ static void build_probe_list(struct inode *inode,
>  	min = vaddr_to_offset(vma, start);
>  	max = min + (end - start) - 1;
>  
> -	spin_lock(&uprobes_treelock);
> +	read_lock(&uprobes_treelock);
>  	n = find_node_in_range(inode, min, max);
>  	if (n) {
>  		for (t = n; t; t = rb_prev(t)) {
> @@ -1316,7 +1316,7 @@ static void build_probe_list(struct inode *inode,
>  			get_uprobe(u);
>  		}
>  	}
> -	spin_unlock(&uprobes_treelock);
> +	read_unlock(&uprobes_treelock);
>  }
>  
>  /* @vma contains reference counter, not the probed instruction. */
> @@ -1407,9 +1407,9 @@ vma_has_uprobes(struct vm_area_struct *vma, unsigned long start, unsigned long e
>  	min = vaddr_to_offset(vma, start);
>  	max = min + (end - start) - 1;
>  
> -	spin_lock(&uprobes_treelock);
> +	read_lock(&uprobes_treelock);
>  	n = find_node_in_range(inode, min, max);
> -	spin_unlock(&uprobes_treelock);
> +	read_unlock(&uprobes_treelock);
>  
>  	return !!n;
>  }
> -- 
> 2.43.0
> 


-- 
Masami Hiramatsu (Google) <mhiramat@...nel.org>
