Message-ID: <CAP-5=fXf12-Ym_iFhXeKpj5mb-QsnFCEdorp9gf=OC86c8p8WA@mail.gmail.com>
Date: Thu, 27 Apr 2023 17:32:26 -0700
From: Ian Rogers <irogers@...gle.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>,
Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>,
linux-perf-users@...r.kernel.org, bpf@...r.kernel.org,
Andrii Nakryiko <andrii@...nel.org>,
Hao Luo <haoluo@...gle.com>, Song Liu <song@...nel.org>,
Andrii Nakryiko <andrii.nakryiko@...il.com>
Subject: Re: [PATCH 2/2] perf lock contention: Rework offset calculation with
BPF CO-RE
On Thu, Apr 27, 2023 at 4:48 PM Namhyung Kim <namhyung@...nel.org> wrote:
>
> It seems the BPF CO-RE relocation doesn't work well with the pattern
> that takes only the field offset. Use offsetof() to make the access
> explicit so that the compiler generates the correct code.
>
> Fixes: 0c1228486bef ("perf lock contention: Support pre-5.14 kernels")
> Co-developed-by: Andrii Nakryiko <andrii.nakryiko@...il.com>
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
Acked-by: Ian Rogers <irogers@...gle.com>
Thanks,
Ian
> ---
> tools/perf/util/bpf_skel/lock_contention.bpf.c | 14 +++++++-------
> 1 file changed, 7 insertions(+), 7 deletions(-)
>
> diff --git a/tools/perf/util/bpf_skel/lock_contention.bpf.c b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> index 30c193078bdb..8d3cfbb3cc65 100644
> --- a/tools/perf/util/bpf_skel/lock_contention.bpf.c
> +++ b/tools/perf/util/bpf_skel/lock_contention.bpf.c
> @@ -429,21 +429,21 @@ struct rq___new {
> SEC("raw_tp/bpf_test_finish")
> int BPF_PROG(collect_lock_syms)
> {
> - __u64 lock_addr;
> + __u64 lock_addr, lock_off;
> __u32 lock_flag;
>
> + if (bpf_core_field_exists(struct rq___new, __lock))
> + lock_off = offsetof(struct rq___new, __lock);
> + else
> + lock_off = offsetof(struct rq___old, lock);
> +
> for (int i = 0; i < MAX_CPUS; i++) {
> struct rq *rq = bpf_per_cpu_ptr(&runqueues, i);
> - struct rq___new *rq_new = (void *)rq;
> - struct rq___old *rq_old = (void *)rq;
>
> if (rq == NULL)
> break;
>
> - if (bpf_core_field_exists(rq_new->__lock))
> - lock_addr = (__u64)&rq_new->__lock;
> - else
> - lock_addr = (__u64)&rq_old->lock;
> + lock_addr = (__u64)(void *)rq + lock_off;
> lock_flag = LOCK_CLASS_RQLOCK;
> bpf_map_update_elem(&lock_syms, &lock_addr, &lock_flag, BPF_ANY);
> }
> --
> 2.40.1.495.gc816e09b53d-goog
>