Message-ID: <CAADnVQLm-jA5-39-LUKybO2oGbDRr2RgPtJH5iXoeKnYqdJUuw@mail.gmail.com>
Date: Fri, 20 Dec 2024 15:52:36 -0800
From: Alexei Starovoitov <alexei.starovoitov@...il.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>, Ian Rogers <irogers@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
"linux-perf-use." <linux-perf-users@...r.kernel.org>, Andrii Nakryiko <andrii@...nel.org>,
Song Liu <song@...nel.org>, bpf <bpf@...r.kernel.org>,
Stephane Eranian <eranian@...gle.com>, Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>,
Kees Cook <kees@...nel.org>, Chun-Tse Shao <ctshao@...gle.com>
Subject: Re: [PATCH v3 2/4] perf lock contention: Run BPF slab cache iterator

On Thu, Dec 19, 2024 at 10:01 PM Namhyung Kim <namhyung@...nel.org> wrote:
> +struct bpf_iter__kmem_cache___new {
> +	struct kmem_cache *s;
> +} __attribute__((preserve_access_index));
> +
> +SEC("iter/kmem_cache")
> +int slab_cache_iter(void *ctx)
> +{
> +	struct kmem_cache *s = NULL;
> +	struct slab_cache_data d;
> +	const char *nameptr;
> +
> +	if (bpf_core_type_exists(struct bpf_iter__kmem_cache)) {
> +		struct bpf_iter__kmem_cache___new *iter = ctx;
> +
> +		s = BPF_CORE_READ(iter, s);
> +	}
> +
> +	if (s == NULL)
> +		return 0;
> +
> +	nameptr = BPF_CORE_READ(s, name);
Since the feature depends on the latest kernel, please use
direct access. There is no need to use BPF_CORE_READ() to
be compatible with old kernels.
Plain iter->s and s->name will work and will be much faster.
Underneath, these loads will be marked with the PROBE_MEM flag and
will be equivalent to probe_read_kernel() calls, but faster,
since the whole thing is inlined by the JITs.
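
A minimal sketch of what that could look like, assuming vmlinux.h (or
an equivalent local definition) provides struct bpf_iter__kmem_cache
and struct kmem_cache, and that the rest of the handler stays as in
the patch:

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

char LICENSE[] SEC("license") = "GPL";

SEC("iter/kmem_cache")
int slab_cache_iter(void *ctx)
{
	struct bpf_iter__kmem_cache *iter = ctx;
	/* direct load; the verifier marks it PROBE_MEM, no CO-RE helper needed */
	struct kmem_cache *s = iter->s;
	const char *nameptr;

	if (s == NULL)
		return 0;

	/* also a PROBE_MEM load, inlined by the JIT */
	nameptr = s->name;

	/* ... rest of the handler as in the original patch ... */
	return 0;
}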