Message-Id: <175095968570.2045399.17117196657041897009.b4-ty@kernel.org>
Date: Thu, 26 Jun 2025 10:41:25 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Arnaldo Carvalho de Melo <acme@...nel.org>,
Ian Rogers <irogers@...gle.com>, Kan Liang <kan.liang@...ux.intel.com>,
Namhyung Kim <namhyung@...nel.org>
Cc: Jiri Olsa <jolsa@...nel.org>, Adrian Hunter <adrian.hunter@...el.com>,
Peter Zijlstra <peterz@...radead.org>, Ingo Molnar <mingo@...nel.org>,
LKML <linux-kernel@...r.kernel.org>, linux-perf-users@...r.kernel.org,
Song Liu <song@...nel.org>, bpf@...r.kernel.org,
Howard Chu <howardchu95@...il.com>
Subject: Re: [PATCH v2] perf trace: Split BPF skel code to
util/bpf_trace_augment.c
On Mon, 23 Jun 2025 15:57:21 -0700, Namhyung Kim wrote:
> And make builtin-trace.c less conditional. Dummy functions will be
> called when BUILD_BPF_SKEL=0 is used. This makes builtin-trace.c
> slightly smaller and simpler by removing the skeleton and its helpers.
>
> The conditional guard of trace__init_syscalls_bpf_prog_array_maps() is
> changed from HAVE_BPF_SKEL to HAVE_LIBBPF_SUPPORT as it doesn't
> have a skeleton in the code directly. And a dummy function is added so
> that it can be called unconditionally. The function will succeed only
> if both conditions are true.
>
> [...]
Applied to perf-tools-next, thanks!
Best regards,
Namhyung