Message-ID: <CAH0uvoi_-xSCuL9VfMNWCiqc3kir1FMBmoCG_-jDtMbOtFmY9A@mail.gmail.com>
Date: Wed, 14 May 2025 11:36:53 -0700
From: Howard Chu <howardchu95@...il.com>
To: Namhyung Kim <namhyung@...nel.org>
Cc: Arnaldo Carvalho de Melo <acme@...nel.org>, Ian Rogers <irogers@...gle.com>,
Kan Liang <kan.liang@...ux.intel.com>, Jiri Olsa <jolsa@...nel.org>,
Adrian Hunter <adrian.hunter@...el.com>, Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, LKML <linux-kernel@...r.kernel.org>,
linux-perf-users@...r.kernel.org, Song Liu <song@...nel.org>, bpf@...r.kernel.org
Subject: Re: [PATCH] perf trace: Split BPF skel code to util/trace_augment.c
Hi Namhyung,
It does not apply cleanly, probably because the cgroup patch was merged
beforehand. Could you please rebase it so others can test it? Otherwise,
this patch looks good to me.

And sorry about the delay and for breaking my promise to review it
within two days...
On Tue, Apr 29, 2025 at 11:06 PM Namhyung Kim <namhyung@...nel.org> wrote:
>
> And make builtin-trace.c less conditional. Dummy functions will be
> called when BUILD_BPF_SKEL=0 is used. This makes builtin-trace.c
> slightly smaller and simpler by removing the skeleton and its helpers.
>
> The conditional guard of trace__init_syscalls_bpf_prog_array_maps() is
> changed from HAVE_BPF_SKEL to HAVE_LIBBPF_SUPPORT as it doesn't
> reference a skeleton in the code directly. And a dummy function is added
> so that it can be called unconditionally. The function will succeed
> only if both conditions are true.
>
> Do not include trace_augment.h from the BPF code and move the definition
> of TRACE_AUG_MAX_BUF into the BPF code directly.
>
> Cc: Howard Chu <howardchu95@...il.com>
> Signed-off-by: Namhyung Kim <namhyung@...nel.org>
Reviewed-by: Howard Chu <howardchu95@...il.com>
Thanks,
Howard