Message-ID: <3b088d08-2c01-4290-8497-2855935bf8af@linux.intel.com>
Date: Wed, 7 May 2025 15:00:13 -0400
From: "Liang, Kan" <kan.liang@...ux.intel.com>
To: Ian Rogers <irogers@...gle.com>
Cc: peterz@...radead.org, mingo@...hat.com, namhyung@...nel.org,
 mark.rutland@....com, linux-kernel@...r.kernel.org,
 linux-perf-users@...r.kernel.org, eranian@...gle.com, ctshao@...gle.com,
 tmricht@...ux.ibm.com
Subject: Re: [RFC PATCH 01/15] perf: Fix the throttle logic for a group



On 2025-05-07 12:52 p.m., Ian Rogers wrote:
> On Tue, May 6, 2025 at 9:48 AM <kan.liang@...ux.intel.com> wrote:
>>
>> From: Kan Liang <kan.liang@...ux.intel.com>
>>
>> The current throttle logic doesn't work well with a group, e.g., the
>> following sampling-read case.
>>
>> $ perf record -e "{cycles,cycles}:S" ...
>>
>> $ perf report -D | grep THROTTLE | tail -2
>>             THROTTLE events:        426  ( 9.0%)
>>           UNTHROTTLE events:        425  ( 9.0%)
>>
>> $ perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
>> 0 1020120874009167 0x74970 [0x68]: PERF_RECORD_SAMPLE(IP, 0x1):
>> ... sample_read:
>> .... group nr 2
>> ..... id 0000000000000327, value 000000000cbb993a, lost 0
>> ..... id 0000000000000328, value 00000002211c26df, lost 0
>>
>> The second cycles event has a much larger value than the first cycles
>> event in the same group.
>>
>> The current throttle logic in the generic code only logs the THROTTLE
>> event. It relies on the specific driver implementation to disable
>> events. However, the implementation is similar for all ARCHs: it only
>> disables the event, rather than the whole group.
>>
>> The logic to disable the group should be generic for all ARCHs. Add the
>> logic in the generic code. The following patch will remove the buggy
>> driver-specific implementation.
>>
>> Throttling only happens when an event overflows. Stop the entire
>> group when any event in the group triggers the throttle. Set
>> MAX_INTERRUPTS on the leader event to indicate that the group is
>> throttled.
>>
>> Unthrottling can happen in three places:
>> - Event/group scheduling. All events in the group are scheduled one by
>>   one, and all of them will be unthrottled eventually. Nothing needs
>>   to be changed.
>> - perf_adjust_freq_unthr_events() on each tick. The group needs to be
>>   restarted altogether.
>> - __perf_event_period(). The whole group needs to be restarted
>>   altogether as well.
>>
>> With the fix,
>> $ sudo perf report -D | grep PERF_RECORD_SAMPLE -a4 | tail -n 5
>> 0 3573470770332 0x12f5f8 [0x70]: PERF_RECORD_SAMPLE(IP, 0x2):
>> ... sample_read:
>> .... group nr 2
>> ..... id 0000000000000a28, value 00000004fd3dfd8f, lost 0
>> ..... id 0000000000000a29, value 00000004fd3dfd8f, lost 0
> 
> Thanks Kan! The patches look good to me. As I understand it, patches 2
> to 15 just remove the logic where an event is unnecessarily stopped
> twice, so is it possible to test just this patch in isolation?
> Given the logic is generic, it also applies to software events, so you
> should be able to reproduce the problem with `perf record -e
> "{cpu-clock,cpu-clock}:S" ...`, possibly by reducing the period or
> increasing the frequency.

I don't think the two cpu-clock events in a group can be made to read
exactly the same value.
The SW clock events work differently. They rely on a per-event hrtimer
rather than the overflow interrupt, and I don't think there is a global
control for SW events. Strictly speaking, they don't support "group".
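
For reference, each SW clock event is driven by its own hrtimer
callback. Below is a heavily simplified sketch of that path, modeled
on perf_swevent_hrtimer() in kernel/events/core.c; details are elided
and may be inexact:

static enum hrtimer_restart sw_clock_hrtimer(struct hrtimer *hrtimer)
{
	struct perf_event *event;
	u64 period;

	/*
	 * The timer maps back to exactly one event: there is no
	 * group-wide stop/start boundary like a HW overflow interrupt
	 * provides, so siblings keep counting independently.
	 */
	event = container_of(hrtimer, struct perf_event, hw.hrtimer);

	if (event->state != PERF_EVENT_STATE_ACTIVE)
		return HRTIMER_NORESTART;

	event->pmu->read(event);
	/* ... take a sample and, if needed, throttle this event only ... */

	/* Re-arm the timer for the next period of this one event. */
	period = max_t(u64, 10000, event->hw.sample_period);
	hrtimer_forward_now(hrtimer, ns_to_ktime(period));

	return HRTIMER_RESTART;
}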

The above result should only be observable on a PMU that strictly
supports "group", e.g., the Intel PMU.

For a PMU that doesn't strictly support "group", the patch should merely
minimize the impact of the throttle logic, which is hard to demonstrate.
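
(For anyone who wants to try Ian's suggestion anyway, the repro would be
along these lines; the frequency and workload below are illustrative
only:

$ perf record -F 50000 -e "{cpu-clock,cpu-clock}:S" -- <workload>
$ perf report -D | grep THROTTLE | tail -2

As explained above, I wouldn't expect the two cpu-clock values to match
exactly even with the fix.)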

> This would be nice to show that the fix addresses the problem more
> generically than just on the Intel PMU.

I only have Intel machines.
Hopefully people with AMD, ARM, or POWER machines can give it a try.

Thanks,
Kan

>
> Ian
> 
>> Signed-off-by: Kan Liang <kan.liang@...ux.intel.com>
>> ---
>>  kernel/events/core.c | 55 +++++++++++++++++++++++++++++++++-----------
>>  1 file changed, 41 insertions(+), 14 deletions(-)
>>
>> diff --git a/kernel/events/core.c b/kernel/events/core.c
>> index a84abc2b7f20..eb0dc871f4f1 100644
>> --- a/kernel/events/core.c
>> +++ b/kernel/events/core.c
>> @@ -2734,6 +2734,38 @@ void perf_event_disable_inatomic(struct perf_event *event)
>>  static void perf_log_throttle(struct perf_event *event, int enable);
>>  static void perf_log_itrace_start(struct perf_event *event);
>>
>> +static void perf_event_group_unthrottle(struct perf_event *event, bool start_event)
>> +{
>> +       struct perf_event *leader = event->group_leader;
>> +       struct perf_event *sibling;
>> +
>> +       if (leader != event || start_event)
>> +               leader->pmu->start(leader, 0);
>> +       leader->hw.interrupts = 0;
>> +
>> +       for_each_sibling_event(sibling, leader) {
>> +               if (sibling != event || start_event)
>> +                       sibling->pmu->start(sibling, 0);
>> +               sibling->hw.interrupts = 0;
>> +       }
>> +
>> +       perf_log_throttle(leader, 1);
>> +}
>> +
>> +static void perf_event_group_throttle(struct perf_event *event)
>> +{
>> +       struct perf_event *leader = event->group_leader;
>> +       struct perf_event *sibling;
>> +
>> +       leader->hw.interrupts = MAX_INTERRUPTS;
>> +       leader->pmu->stop(leader, 0);
>> +
>> +       for_each_sibling_event(sibling, leader)
>> +               sibling->pmu->stop(sibling, 0);
>> +
>> +       perf_log_throttle(leader, 0);
>> +}
>> +
>>  static int
>>  event_sched_in(struct perf_event *event, struct perf_event_context *ctx)
>>  {
>> @@ -4389,10 +4421,8 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
>>                 hwc = &event->hw;
>>
>>                 if (hwc->interrupts == MAX_INTERRUPTS) {
>> -                       hwc->interrupts = 0;
>> -                       perf_log_throttle(event, 1);
>> -                       if (!event->attr.freq || !event->attr.sample_freq)
>> -                               event->pmu->start(event, 0);
>> +                       perf_event_group_unthrottle(event,
>> +                               !event->attr.freq || !event->attr.sample_freq);
>>                 }
>>
>>                 if (!event->attr.freq || !event->attr.sample_freq)
>> @@ -6421,14 +6451,6 @@ static void __perf_event_period(struct perf_event *event,
>>         active = (event->state == PERF_EVENT_STATE_ACTIVE);
>>         if (active) {
>>                 perf_pmu_disable(event->pmu);
>> -               /*
>> -                * We could be throttled; unthrottle now to avoid the tick
>> -                * trying to unthrottle while we already re-started the event.
>> -                */
>> -               if (event->hw.interrupts == MAX_INTERRUPTS) {
>> -                       event->hw.interrupts = 0;
>> -                       perf_log_throttle(event, 1);
>> -               }
>>                 event->pmu->stop(event, PERF_EF_UPDATE);
>>         }
>>
>> @@ -6436,6 +6458,12 @@ static void __perf_event_period(struct perf_event *event,
>>
>>         if (active) {
>>                 event->pmu->start(event, PERF_EF_RELOAD);
>> +               /*
>> +                * We could be throttled; unthrottle now to avoid the tick
>> +                * trying to unthrottle while we already re-started the event.
>> +                */
>> +               if (event->group_leader->hw.interrupts == MAX_INTERRUPTS)
>> +                       perf_event_group_unthrottle(event, false);
>>                 perf_pmu_enable(event->pmu);
>>         }
>>  }
>> @@ -10326,8 +10354,7 @@ __perf_event_account_interrupt(struct perf_event *event, int throttle)
>>         if (unlikely(throttle && hwc->interrupts >= max_samples_per_tick)) {
>>                 __this_cpu_inc(perf_throttled_count);
>>                 tick_dep_set_cpu(smp_processor_id(), TICK_DEP_BIT_PERF_EVENTS);
>> -               hwc->interrupts = MAX_INTERRUPTS;
>> -               perf_log_throttle(event, 0);
>> +               perf_event_group_throttle(event);
>>                 ret = 1;
>>         }
>>
>> --
>> 2.38.1
>>
> 

