Message-ID: <20210824060157.3889139-1-songliubraving@fb.com>
Date: Mon, 23 Aug 2021 23:01:54 -0700
From: Song Liu <songliubraving@...com>
To: <bpf@...r.kernel.org>, <linux-kernel@...r.kernel.org>
CC: <acme@...nel.org>, <peterz@...radead.org>, <mingo@...hat.com>,
<kernel-team@...com>, Song Liu <songliubraving@...com>
Subject: [PATCH bpf-next 0/3] bpf: introduce bpf_get_branch_trace
Branch stack can be very useful in understanding software events. For
example, when a long function, e.g. sys_perf_event_open, returns an errno,
it is not obvious why the function failed. Branch stack could provide very
helpful information in this type of scenario.
This set adds support to read branch stack with a new BPF helper
bpf_get_branch_trace(). Currently, this is only supported on Intel systems.
It is also possible to support the same feature on PowerPC.
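For readers skimming the cover letter, a BPF-side usage sketch might look
roughly like the following. This is hypothetical: the helper's exact
signature and return convention are defined by patch 2
(include/uapi/linux/bpf.h), and the declaration would normally come from
the generated bpf_helper_defs.h; the program and section names here are
made up for illustration.

```c
/* Hypothetical fexit program calling the new helper; see patch 2 for
 * the real helper definition and patch 3 for the actual selftest.
 */
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>

#define MAX_LBR_ENTRIES 32

struct perf_branch_entry entries[MAX_LBR_ENTRIES] = {};

SEC("fexit/__x64_sys_perf_event_open")
int BPF_PROG(trace_perf_open_exit)
{
	/* Assumed to fill 'entries' with branch records and return the
	 * amount written, or a negative error; exact semantics are in
	 * the patch, not confirmed here.
	 */
	long ret = bpf_get_branch_trace(entries, sizeof(entries));

	if (ret < 0)
		return 0;
	/* entries[] can then be exported to user space, e.g. via a map. */
	return 0;
}

char LICENSE[] SEC("license") = "GPL";
```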
The hardware that records the branch stack is not stopped automatically on
software events. Therefore, it is necessary to stop it in software as soon
as possible; otherwise, the entries in the hardware buffers/registers will
be overwritten by new branches. A key design consideration in this set is
to minimize the number of branch record entries generated between when the
event triggers and when the hardware recorder is stopped. Based on this
goal, the current design differs from the discussions in the original
RFC [1]:
1) A static call is used when supported, to avoid a function pointer
   dereference;
2) intel_pmu_lbr_disable_all() is used instead of perf_pmu_disable(),
   because the latter uses about 10 LBR entries before stopping the LBR.
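To illustrate design point 1), the static-call plumbing could be sketched
roughly as below. All names and signatures here are illustrative
assumptions, not taken from the patches; see patch 1 for the real wiring.

```c
/* Illustrative: reach the Intel LBR snapshot/stop routine through a
 * static call, avoiding a function-pointer dereference (and retpoline)
 * on the hot path. Names are made up for this sketch.
 */
#include <linux/static_call.h>

struct perf_branch_entry;

typedef void (*perf_snapshot_branch_stack_t)(struct perf_branch_entry *entries,
					     unsigned int nr);

DEFINE_STATIC_CALL_NULL(perf_snapshot_branch_stack,
			perf_snapshot_branch_stack_t);

/* The x86 PMU init path would patch in the Intel implementation: */
void perf_register_snapshot_branch_stack(perf_snapshot_branch_stack_t fn)
{
	static_call_update(perf_snapshot_branch_stack, fn);
}

/* Callers (e.g. the BPF trampoline) then get a direct call site: */
static void snapshot_branches(struct perf_branch_entry *entries,
			      unsigned int nr)
{
	static_call(perf_snapshot_branch_stack)(entries, nr);
}
```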
With the current code on Intel CPUs, the LBR is stopped within 6 branch
entries after the fexit event triggers:
ID: 0 from intel_pmu_lbr_disable_all.part.10+37 to intel_pmu_lbr_disable_all.part.10+72
ID: 1 from intel_pmu_lbr_disable_all.part.10+33 to intel_pmu_lbr_disable_all.part.10+37
ID: 2 from intel_pmu_snapshot_branch_stack+46 to intel_pmu_lbr_disable_all.part.10+0
ID: 3 from __bpf_prog_enter+38 to intel_pmu_snapshot_branch_stack+0
ID: 4 from __bpf_prog_enter+8 to __bpf_prog_enter+38
ID: 5 from __brk_limit+477020214 to __bpf_prog_enter+0
ID: 6 from bpf_fexit_loop_test1+22 to __brk_limit+477020195
ID: 7 from bpf_fexit_loop_test1+20 to bpf_fexit_loop_test1+13
ID: 8 from bpf_fexit_loop_test1+20 to bpf_fexit_loop_test1+13
...
[1] https://lore.kernel.org/bpf/20210818012937.2522409-1-songliubraving@fb.com/
Song Liu (3):
perf: enable branch record for software events
bpf: introduce helper bpf_get_branch_trace
selftests/bpf: add test for bpf_get_branch_trace
arch/x86/events/intel/core.c | 5 +-
arch/x86/events/intel/lbr.c | 12 ++
arch/x86/events/perf_event.h | 2 +
include/linux/filter.h | 3 +-
include/linux/perf_event.h | 33 ++++++
include/uapi/linux/bpf.h | 16 +++
kernel/bpf/trampoline.c | 15 +++
kernel/bpf/verifier.c | 7 ++
kernel/events/core.c | 28 +++++
kernel/trace/bpf_trace.c | 30 +++++
net/bpf/test_run.c | 15 ++-
tools/include/uapi/linux/bpf.h | 16 +++
.../bpf/prog_tests/get_branch_trace.c | 106 ++++++++++++++++++
.../selftests/bpf/progs/get_branch_trace.c | 41 +++++++
tools/testing/selftests/bpf/trace_helpers.c | 30 +++++
tools/testing/selftests/bpf/trace_helpers.h | 5 +
16 files changed, 361 insertions(+), 3 deletions(-)
create mode 100644 tools/testing/selftests/bpf/prog_tests/get_branch_trace.c
create mode 100644 tools/testing/selftests/bpf/progs/get_branch_trace.c
--
2.30.2