Message-ID: <e7e0ef26-2335-4e67-984c-705cb33ff4c3@linux.intel.com>
Date: Thu, 5 Sep 2024 16:29:19 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Andrii Nakryiko <andrii.nakryiko@...il.com>
Cc: Andrii Nakryiko <andrii@...nel.org>, linux-perf-users@...r.kernel.org,
 peterz@...radead.org, x86@...nel.org, mingo@...hat.com,
 linux-kernel@...r.kernel.org, bpf@...r.kernel.org, acme@...nel.org,
 kernel-team@...a.com, stable@...r.kernel.org
Subject: Re: [PATCH] perf/x86: fix wrong assumption that LBR is only useful
 for sampling events



On 2024-09-05 4:22 p.m., Andrii Nakryiko wrote:
> On Thu, Sep 5, 2024 at 12:21 PM Liang, Kan <kan.liang@...ux.intel.com> wrote:
>>
>>
>>
>> On 2024-09-05 2:00 p.m., Andrii Nakryiko wrote:
>>> It's incorrect to assume that LBR can/should only be used with sampling
>>> events. BPF subsystem provides bpf_get_branch_snapshot() BPF helper,
>>> which expects a properly setup and activated perf event which allows
>>> kernel to capture LBR data.
>>>
>>> For instance, the retsnoop tool ([0]) makes extensive use of this
>>> functionality and sets up a perf event as follows:
>>>
>>>       struct perf_event_attr attr;
>>>
>>>       memset(&attr, 0, sizeof(attr));
>>>       attr.size = sizeof(attr);
>>>       attr.type = PERF_TYPE_HARDWARE;
>>>       attr.config = PERF_COUNT_HW_CPU_CYCLES;
>>>       attr.sample_type = PERF_SAMPLE_BRANCH_STACK;
>>>       attr.branch_sample_type = PERF_SAMPLE_BRANCH_KERNEL;
>>>
>>> The commit referenced in the Fixes tag broke this setup by making the
>>> invalid assumption that LBR is useful only for sampling events. Remove
>>> that assumption.
>>>
>>> Note, earlier we removed a similar assumption on AMD side of LBR support,
>>> see [1] for details.
>>>
>>>   [0] https://github.com/anakryiko/retsnoop
>>>   [1] 9794563d4d05 ("perf/x86/amd: Don't reject non-sampling events with configured LBR")
>>>
>>> Cc: stable@...r.kernel.org # 6.8+
>>> Fixes: 85846b27072d ("perf/x86: Add PERF_X86_EVENT_NEEDS_BRANCH_STACK flag")
>>> Signed-off-by: Andrii Nakryiko <andrii@...nel.org>
>>> ---
>>>  arch/x86/events/intel/core.c | 2 +-
>>>  1 file changed, 1 insertion(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>>> index 9e519d8a810a..f82a342b8852 100644
>>> --- a/arch/x86/events/intel/core.c
>>> +++ b/arch/x86/events/intel/core.c
>>> @@ -3972,7 +3972,7 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>>                       x86_pmu.pebs_aliases(event);
>>>       }
>>>
>>> -     if (needs_branch_stack(event) && is_sampling_event(event))
>>> +     if (needs_branch_stack(event))
>>>               event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>>
>> Limiting the LBR to sampling events avoids unnecessary branch stack
>> setup for a counting event in a sample read. The above change would
>> break the sample-read case.
>>
>> How about the below patch (not tested)? Is it good enough for the BPF usage?
>>
>> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
>> index 0c9c2706d4ec..8d67cbda916b 100644
>> --- a/arch/x86/events/intel/core.c
>> +++ b/arch/x86/events/intel/core.c
>> @@ -3972,8 +3972,12 @@ static int intel_pmu_hw_config(struct perf_event *event)
>>                 x86_pmu.pebs_aliases(event);
>>         }
>>
>> -       if (needs_branch_stack(event) && is_sampling_event(event))
>> -               event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>> +       if (needs_branch_stack(event)) {
>> +               /* Avoid branch stack setup for counting events in SAMPLE READ */
>> +               if (is_sampling_event(event) ||
>> +                   !(event->attr.sample_type & PERF_SAMPLE_READ))
>> +                       event->hw.flags  |= PERF_X86_EVENT_NEEDS_BRANCH_STACK;
>> +       }
>>
> 
> I'm sure it will be fine for my use case, as I set only
> PERF_SAMPLE_BRANCH_STACK.
> 
> But I'll leave it up to perf subsystem experts to decide if this
> condition makes sense, because looking at what PERF_SAMPLE_READ is:
> 
>           PERF_SAMPLE_READ
>                  Record counter values for all events in a group,
>                  not just the group leader.
> 
> It's not clear why this would disable LBR, if specified.

It only disables LBR for a counting event with SAMPLE_READ, since the LBR
is only read on the sampling event's overflow.

Thanks,
Kan
> 
>>         if (branch_sample_counters(event)) {
>>                 struct perf_event *leader, *sibling;
>>
>>
>> Thanks,
>> Kan
>>>
>>>       if (branch_sample_counters(event)) {
