Message-ID: <20240702120146.GB28838@noisy.programming.kicks-ass.net>
Date: Tue, 2 Jul 2024 14:01:46 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andrii Nakryiko <andrii@...nel.org>
Cc: linux-trace-kernel@...r.kernel.org, rostedt@...dmis.org,
mhiramat@...nel.org, oleg@...hat.com, mingo@...hat.com,
bpf@...r.kernel.org, jolsa@...nel.org, paulmck@...nel.org,
clm@...a.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 00/12] uprobes: add batched register/unregister APIs
and per-CPU RW semaphore
On Tue, Jul 02, 2024 at 01:54:47PM +0200, Peter Zijlstra wrote:
> @@ -668,12 +677,25 @@ static struct uprobe *__find_uprobe(struct inode *inode, loff_t offset)
>  static struct uprobe *find_uprobe(struct inode *inode, loff_t offset)
>  {
>  	struct uprobe *uprobe;
> +	unsigned seq;
>
> +	guard(rcu)();
>
> +	do {
> +		seq = read_seqcount_begin(&uprobes_seqcount);
> +		uprobe = __find_uprobe(inode, offset);
> +		if (uprobe) {
> +			/*
> +			 * Lockless RB-tree lookups are prone to false-negatives.
> +			 * If they find something, it's good. If they do not find,
> +			 * it needs to be validated.
> +			 */
> +			return uprobe;
> +		}
> +	} while (read_seqcount_retry(&uprobes_seqcount, seq));
> +
> +	/* Really didn't find anything. */
> +	return NULL;
>  }
>
>  static struct uprobe *__insert_uprobe(struct uprobe *uprobe)
> @@ -702,7 +724,9 @@ static struct uprobe *insert_uprobe(struct uprobe *uprobe)
>  	struct uprobe *u;
>
>  	write_lock(&uprobes_treelock);
> +	write_seqcount_begin(&uprobes_seqcount);
>  	u = __insert_uprobe(uprobe);
> +	write_seqcount_end(&uprobes_seqcount);
>  	write_unlock(&uprobes_treelock);
>
>  	return u;
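
Note the above assumes a seqcount declared next to uprobes_treelock;
since the treelock is an rwlock_t here, something like the rwlock-aware
variant would do (my sketch, not part of the quoted patch), which also
lets lockdep verify that writers actually hold the treelock:

	/* assumes uprobes_treelock remains an rwlock_t */
	static seqcount_rwlock_t uprobes_seqcount =
		SEQCNT_RWLOCK_ZERO(uprobes_seqcount, &uprobes_treelock);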
Strictly speaking I suppose we should add rb_find_rcu() and
rb_find_add_rcu() that sprinkle some rcu_dereference_raw() and
rb_link_node_rcu() around. See the examples in __lt_find() and
__lt_insert().
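
Completely untested, but modelled on __lt_find()/__lt_insert() from
rbtree_latch.h, those helpers could look something like this (the
bodies are my sketch of rb_find()/rb_find_add() with the RCU bits
sprinkled in):

	static __always_inline struct rb_node *
	rb_find_rcu(const void *key, const struct rb_root *tree,
		    int (*cmp)(const void *key, const struct rb_node *))
	{
		struct rb_node *node = rcu_dereference_raw(tree->rb_node);

		while (node) {
			int c = cmp(key, node);

			if (c < 0)
				node = rcu_dereference_raw(node->rb_left);
			else if (c > 0)
				node = rcu_dereference_raw(node->rb_right);
			else
				return node;
		}

		return NULL;
	}

	static __always_inline struct rb_node *
	rb_find_add_rcu(struct rb_node *node, struct rb_root *tree,
			int (*cmp)(struct rb_node *, const struct rb_node *))
	{
		struct rb_node **link = &tree->rb_node;
		struct rb_node *parent = NULL;
		int c;

		while (*link) {
			parent = *link;
			c = cmp(node, parent);

			if (c < 0)
				link = &parent->rb_left;
			else if (c > 0)
				link = &parent->rb_right;
			else
				return parent;
		}

		/* publish the new node with an RCU-safe pointer store */
		rb_link_node_rcu(node, parent, link);
		rb_insert_color(node, tree);
		return NULL;
	}

The rebalancing in rb_insert_color() can still rotate nodes under a
concurrent reader, which is exactly why the lookup side has to
re-validate negative results against the seqcount.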