Message-ID: <20190118084132.GB10855@hirez.programming.kicks-ass.net>
Date: Fri, 18 Jan 2019 09:41:32 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Song Liu <songliubraving@...com>
Cc: linux-kernel@...r.kernel.org, netdev@...r.kernel.org,
acme@...nel.org, ast@...nel.org, daniel@...earbox.net,
kernel-team@...com, dsahern@...il.com,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH v10 perf, bpf-next 1/9] perf, bpf: Introduce
PERF_RECORD_KSYMBOL
On Thu, Jan 17, 2019 at 01:56:53PM +0100, Peter Zijlstra wrote:
> +static __always_inline struct latch_tree_node *
> +latch_tree_first(struct latch_tree_root *root)
> +{
> +        struct latch_tree_node *ltn = NULL;
> +        struct rb_node *node;
> +        unsigned int seq;
> +
> +        do {
> +                struct rb_root *rbr;
> +
> +                seq = raw_read_seqcount_latch(&root->seq);
> +                rbr = &root->tree[seq & 1];
> +                node = rb_first(rbr);
> +        } while (read_seqcount_retry(&root->seq, seq));
> +
> +        if (node)
> +                ltn = __lt_from_rb(node, seq & 1);
> +
> +        return ltn;
> +}
> +
> +/**
> + * latch_tree_next() - find the next @ltn in @root per sort order
> + * @root: trees to which @ltn belongs
> + * @ltn: node to start from
> + *
> + * Does a lockless lookup in the trees @root for the next node starting at
> + * @ltn.
> + *
> + * Using this function outside of the write side lock is rather dodgy but given
> + * latch_tree_erase() doesn't re-init the nodes and the whole iteration is done
> + * under a single RCU critical section, it should be non-fatal and generate some
> + * semblance of order - albeit possibly missing chunks of the tree.
> + */
> +static __always_inline struct latch_tree_node *
> +latch_tree_next(struct latch_tree_root *root, struct latch_tree_node *ltn)
> +{
> +        struct rb_node *node;
> +        unsigned int seq;
> +
> +        do {
> +                seq = raw_read_seqcount_latch(&root->seq);
> +                node = rb_next(&ltn->node[seq & 1]);
> +        } while (read_seqcount_retry(&root->seq, seq));
> +
> +        return __lt_from_rb(node, seq & 1);
> +}
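
FWIW, the intended usage is to do the whole walk inside a single RCU
read-side critical section; a quick sketch (using the kallsym_tree from
further down in this patch, loop body is just a placeholder):

        struct latch_tree_node *ltn;

        rcu_read_lock();
        for (ltn = latch_tree_first(&kallsym_tree); ltn;
             ltn = latch_tree_next(&kallsym_tree, ltn)) {
                /* container_of(ltn, ...) gets at the enclosing object */
        }
        rcu_read_unlock();
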
> +static int kallsym_tree_kallsym(unsigned int symnum, unsigned long *value, char *type,
> +                                char *sym, char *modname, int *exported)
> +{
> +        struct latch_tree_node *ltn;
> +        int i, ret = -ERANGE;
> +
> +        rcu_read_lock();
> +        for (i = 0, ltn = latch_tree_first(&kallsym_tree); i < symnum && ltn;
> +             i++, ltn = latch_tree_next(&kallsym_tree, ltn))
> +                ;
On second thought, I don't think this will be good enough after all;
missing entire subtrees is too much.

The RCU-list iteration will only miss newly added symbols, and for
those we'll get the events anyway; combined we'll still have a complete
picture. Not so when a whole subtree goes missing.

I thought I could avoid the list this way, but alas, not so.
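
That is, something like the usual RCU list walk; a sketch only (the
kallsym_list head and the kn_list member are made-up names here, not
from this patch):

        struct kallsym_node *kn;

        rcu_read_lock();
        /*
         * list_for_each_entry_rcu() (from <linux/rculist.h>) can only
         * skip entries added concurrently with the walk; existing
         * entries are never hidden. For the newly added ones userspace
         * gets a PERF_RECORD_KSYMBOL event, so the combined picture
         * stays complete.
         */
        list_for_each_entry_rcu(kn, &kallsym_list, kn_list) {
                /* emit/inspect kn */
        }
        rcu_read_unlock();
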
> +
> +        if (ltn) {
> +                struct kallsym_node *kn;
> +                char *mod;
> +
> +                kn = container_of(ltn, struct kallsym_node, kn_node);
> +
> +                kn->kn_names(kn, sym, &mod);
> +                if (mod)
> +                        strlcpy(modname, mod, MODULE_NAME_LEN);
> +                else
> +                        modname[0] = '\0';
> +
> +                *type = 't';
> +                *exported = 0;
> +                ret = 0;
> +        }
> +        rcu_read_unlock();
> +
> +        return ret;
> +}