Message-ID: <CAEf4Bza9H=nH4+=dDNm55X5LZp4MVSkKyBcnuNq3+8cP6qt=uQ@mail.gmail.com>
Date: Thu, 5 Sep 2024 13:22:37 -0700
From: Andrii Nakryiko <andrii.nakryiko@...il.com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: Andrii Nakryiko <andrii@...nel.org>, linux-perf-users@...r.kernel.org,
peterz@...radead.org, x86@...nel.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, bpf@...r.kernel.org, acme@...nel.org,
kernel-team@...a.com, stable@...r.kernel.org
Subject: Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful
for sampling events
On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@...ux.intel.com> wrote:
>
>
>
> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
> > It's incorrect to assume that LBR can/should only be used with sampling
> > events. The BPF subsystem provides the bpf_get_branch_snapshot() BPF
> > helper, which expects a properly set up and activated perf event that
> > allows the kernel to capture LBR data.
> >
> > For instance, the retsnoop tool ([0]) makes extensive use of this
> > functionality and sets up the perf event as follows:
> >
> > struct perf_event_attr attr;
> >
> > memset(&attr, 0, sizeof(attr));
> > attr.size = sizeof(attr);
> > attr.type = PERF_TYPE_HARDWARE;
> > attr.config = PERF_COUNT_HW_CPU_CYCLES;
> > attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
> > attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
> >
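> > One such event is opened per CPU to keep LBR capture armed for
> > bpf_get_branch_snapshot() (a simplified sketch with illustrative
> > fds/num_cpus bookkeeping, error handling omitted; the actual retsnoop
> > code differs):
> >
> >     /* needs <linux/perf_event.h>, <sys/syscall.h>, <unistd.h> */
> >     for (cpu = 0; cpu < num_cpus; cpu++)
> >         fds[cpu] = syscall(__NR_perf_event_open, &attr,
> >                            -1 /* pid: any process */, cpu,
> >                            -1 /* group_fd */, 0);
> >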
> > The commit referenced in the Fixes tag broke this setup by making the
> > invalid assumption that LBR is useful only for sampling events. Remove
> > that assumption.
> >
> > Note that we earlier removed a similar assumption on the AMD side of
> > LBR support; see [1] for details.
> >
> > [0] https://github.com/anakryiko/retsnoop
> > [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
> >
> > Cc: stable@...r.kernel.org # 6.8+
> > Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
> > Signed-off-by: Andrii Nakryiko <andrii@...nel.org>
> > ---
> > arch/x86/events/intel/core.c | 2 +-
> > 1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> > index 9e519d8a810a..f82a342b8852 100644
> > --- a/arch/x86/events/intel/core.c
> > +++ b/arch/x86/events/intel/core.c
> > @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
> > x86_pmu.pebs_aliases(event);
> > }
> >
> > - if (needs_branch_stack(event) && is_sampling_event(event))
> > + if (needs_branch_stack(event))
> > event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>
> Limiting LBR to sampling events avoids unnecessary branch stack setup
> for a counting event in the sample-read case. The above change would
> break that case.
>
> How about the below patch (not tested)? Is it good enough for the BPF usage?
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index 0c9c2706d4ec..8d67cbda916b 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
> x86_pmu.pebs_aliases(event);
> }
>
> - if (needs_branch_stack(event) && is_sampling_event(event))
> - event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> + if (needs_branch_stack(event)) {
> + /* Avoid branch stack setup for counting events in SAMPLE READ */
> + if (is_sampling_event(event) ||
> + !(event->attr.sample_type & PERF_SAMPLE_READ))
> + event->hw.flags |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
> + }
>
I'm sure it will be fine for my use case, as I set only
PERF_SAMPLE_BRANCH_STACK.
But I'll leave it up to the perf subsystem experts to decide whether this
condition makes sense, because looking at what PERF_SAMPLE_READ is:
PERF_SAMPLE_READ
Record counter values for all events in a group,
not just the group leader.
It's not clear why this would disable LBR, if specified.
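
For reference, my understanding of the sample-read setup being guarded is
a group where a sampling leader records the values of its counting
siblings. Roughly like this (an illustrative sketch, not code from
retsnoop; the leader/sibling names and the period are made up, error
handling omitted):

    /* needs <linux/perf_event.h>, <sys/syscall.h>, <unistd.h> */
    struct perf_event_attr leader = {}, sibling = {};
    int leader_fd, sibling_fd;

    /* sampling leader; each sample records all group members' counts */
    leader.size = sizeof(leader);
    leader.type = PERF_TYPE_HARDWARE;
    leader.config = PERF_COUNT_HW_CPU_CYCLES;
    leader.sample_period = 100000;
    leader.sample_type = PERF_SAMPLE_READ;
    leader.read_format = PERF_FORMAT_GROUP;

    /* counting sibling; never generates samples of its own */
    sibling.size = sizeof(sibling);
    sibling.type = PERF_TYPE_HARDWARE;
    sibling.config = PERF_COUNT_HW_INSTRUCTIONS;

    leader_fd = syscall(__NR_perf_event_open, &leader,
                        0 /* pid: self */, -1 /* cpu: any */, -1, 0);
    sibling_fd = syscall(__NR_perf_event_open, &sibling,
                         0, -1, leader_fd /* join the group */, 0);

If that's the case the condition is excluding, it wouldn't affect a
standalone event like mine, but the connection to LBR is still not
obvious to me.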
> if (branch_sample_counters(event)) {
> struct perf_event *leader, *sibling;
>
>
> Thanks,
> Kan
> >
> > if (branch_sample_counters(event)) {