Message-ID: <20191206113732.GF2844@hirez.programming.kicks-ass.net>
Date: Fri, 6 Dec 2019 12:37:32 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Daniel Xu <dxu@...uu.xyz>
Cc: ast@...nel.org, daniel@...earbox.net, yhs@...com, kafai@...com,
songliubraving@...com, andriin@...com, netdev@...r.kernel.org,
bpf@...r.kernel.org, mingo@...hat.com, acme@...nel.org,
linux-kernel@...r.kernel.org, kernel-team@...com
Subject: Re: [PATCH bpf] bpf: Add LBR data to BPF_PROG_TYPE_PERF_EVENT prog
context
On Thu, Dec 05, 2019 at 04:12:26PM -0800, Daniel Xu wrote:
> Last branch record (LBR) is an Intel CPU feature that can be configured to
> record certain branches that are taken during code execution. This data
> is particularly interesting for profile-guided optimization (PGO). perf has
> had LBR support for a while, but the data collection can be a bit
> coarse-grained.
>
> We (Facebook) have recently run a lot of experiments with feeding
> filtered LBR data to various PGO pipelines. We've seen really good
> results (+2.5% throughput with lower cpu util and lower latency) by
> feeding high request latency LBR branches to the compiler on a
> request-oriented service. We used bpf to read a special request context
> ID (which is how we associate branches with latency) from a fixed
> userspace address. Reading from the fixed address is why bpf support is
> useful.
>
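[Aside for context: a minimal sketch of what such a fixed-address read can look
like from a perf_event BPF program. The address, section name and program name
are hypothetical; the real address would come from the application, and
bpf_probe_read_user() only exists from v5.5 (older kernels would use
bpf_probe_read()).]

#include <linux/types.h>
#include <linux/bpf.h>
#include <linux/bpf_perf_event.h>
#include <bpf/bpf_helpers.h>

/* Hypothetical: the service publishes its current request context ID at a
 * fixed userspace address agreed on out of band. */
#define REQ_CTX_ADDR 0x7f0000001000UL

SEC("perf_event")
int tag_sample(struct bpf_perf_event_data *ctx)
{
        __u64 req_id = 0;

        /* Read the request context ID from the fixed userspace address. */
        bpf_probe_read_user(&req_id, sizeof(req_id), (void *)REQ_CTX_ADDR);

        /* ... associate this sample (and its branch data) with req_id ... */
        return 0;
}

char _license[] SEC("license") = "GPL";
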
> Aside from this particular use case, having LBR data available to bpf
> progs can be useful to get stack traces out of userspace applications
> that omit frame pointers.
>
> This patch adds support for LBR data to bpf perf progs.
>
> Some notes:
> * We use `__u64 entries[BPF_MAX_LBR_ENTRIES * 3]` instead of
> `struct perf_branch_entry[BPF_MAX_LBR_ENTRIES]` because checkpatch.pl
> warns about including a uapi header from another uapi header
>
> * We define BPF_MAX_LBR_ENTRIES as 32 (instead of using the value from
> arch/x86/events/perf_event.h) because including arch-specific headers
> seems wrong and could introduce circular header includes.
>
> Signed-off-by: Daniel Xu <dxu@...uu.xyz>
> ---
>  include/uapi/linux/bpf_perf_event.h |  5 ++++
>  kernel/trace/bpf_trace.c            | 39 +++++++++++++++++++++++++++++
>  2 files changed, 44 insertions(+)
>
> diff --git a/include/uapi/linux/bpf_perf_event.h b/include/uapi/linux/bpf_perf_event.h
> index eb1b9d21250c..dc87e3d50390 100644
> --- a/include/uapi/linux/bpf_perf_event.h
> +++ b/include/uapi/linux/bpf_perf_event.h
> @@ -10,10 +10,15 @@
>
>  #include <asm/bpf_perf_event.h>
>
> +#define BPF_MAX_LBR_ENTRIES 32
> +
>  struct bpf_perf_event_data {
>  	bpf_user_pt_regs_t regs;
>  	__u64 sample_period;
>  	__u64 addr;
> +	__u64 nr_lbr;
> +	/* Cast to struct perf_branch_entry* before using */
> +	__u64 entries[BPF_MAX_LBR_ENTRIES * 3];
>  };
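
[For concreteness, a sketch of how a perf_event BPF program might walk the
proposed fields. The layout (nr_lbr plus a flat __u64 array, three words per
record) and the cast to struct perf_branch_entry are taken from the patch
above; this is not existing API, and whether the verifier accepts walking
ctx->entries like this depends on the patch's ctx-access handling.]

#include <linux/types.h>
#include <linux/perf_event.h>		/* struct perf_branch_entry */
#include <linux/bpf_perf_event.h>	/* proposed nr_lbr / entries[] */
#include <bpf/bpf_helpers.h>

SEC("perf_event")
int dump_lbr(struct bpf_perf_event_data *ctx)
{
        /* The patch stores records as raw __u64s; view them through the
         * perf uapi branch-entry layout, as the comment above suggests. */
        struct perf_branch_entry *br = (struct perf_branch_entry *)ctx->entries;
        __u64 nr = ctx->nr_lbr;
        int i;

        for (i = 0; i < BPF_MAX_LBR_ENTRIES; i++) {
                if (i >= nr)
                        break;
                bpf_printk("branch %llx -> %llx\n", br[i].from, br[i].to);
        }
        return 0;
}

char _license[] SEC("license") = "GPL";
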
Note how perf has avoided actually using the LBR name and size in its
ABI. There are other architectures that can do branch stacks (PowerPC) and,
given historic precedent, the current size (32) might once again change
(we started at 4 with Intel Core IIRC).
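
[For comparison, the existing sample ABI keeps both the name and the depth out
of the uapi: each PERF_SAMPLE_BRANCH_STACK record carries its own count
followed by that many entries (see the layout comment in
include/uapi/linux/perf_event.h). Roughly, with an illustrative struct name:]

/* What the branch-stack portion of a sample record looks like, roughly: */
struct branch_stack_sample {
        __u64                           nr;             /* records in this sample */
        struct perf_branch_entry        entries[];      /* nr variable-length records */
};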