Message-ID: <20250516141739.GG412060@e132581.arm.com>
Date: Fri, 16 May 2025 15:17:39 +0100
From: Leo Yan <leo.yan@....com>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: peterz@...radead.org, mingo@...hat.com, namhyung@...nel.org,
irogers@...gle.com, mark.rutland@....com,
linux-kernel@...r.kernel.org, linux-perf-users@...r.kernel.org,
eranian@...gle.com, ctshao@...gle.com, tmricht@...ux.ibm.com
Subject: Re: [PATCH V2 01/15] perf: Fix the throttle logic for a group
On Fri, May 16, 2025 at 09:28:07AM -0400, Liang, Kan wrote:
[...]
> > Just a minor suggestion. It seems to me the parameter "start" actually
> > means "only_enable_sibling". To make it more readable, the function can be
> > refined as:
> >
> > static void perf_event_unthrottle_group(struct perf_event *event,
> > 					bool only_enable_sibling)
> > {
> > 	struct perf_event *sibling, *leader = event->group_leader;
> >
> > 	perf_event_unthrottle(leader,
> > 			only_enable_sibling ? leader != event : true);
> > 	...
> > }
> >
>
> It should work for perf_adjust_freq_unthr_events(), which only starts
> the leader.
> But it's possible that __perf_event_period() updates a
> sibling, not the leader.
Should not perf_event_unthrottle_group() always enable sibling events?
The only difference is how the leader event is enabled. In period mode
it can be enabled in perf_event_unthrottle_group() itself; in frequency
mode, because a new period value is generated, the leader event is
enabled in perf_adjust_freq_unthr_events() or in __perf_event_period().
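To make it concrete, with the only_enable_sibling naming the two call
sites would pass the same values as in your diff below; roughly
(untested, just to illustrate the semantics):

	/* perf_adjust_freq_unthr_events(): in frequency mode the event
	 * itself is re-started later, once the new period has been
	 * computed, so only its group members need to be started here. */
	if (hwc->interrupts == MAX_INTERRUPTS) {
		perf_event_unthrottle_group(event,
				event->attr.freq && event->attr.sample_freq);
	}

	/* __perf_event_period(): the event is re-started separately by the
	 * caller, so again only its group members are started here. */
	if (event->hw.interrupts == MAX_INTERRUPTS)
		perf_event_unthrottle_group(event, true);
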
This is why I suggested renaming the flag to only_enable_sibling:
  true:  only enable sibling events
  false: enable all events (leader event and sibling events)
Or we can rename the flag to "skip_start_event", meaning to skip
enabling the event specified in the argument.
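A rough sketch with that name, reusing the body from your patch with
only the flag renamed and inverted (untested, just to show the intent):

static void perf_event_unthrottle_group(struct perf_event *event,
					bool skip_start_event)
{
	struct perf_event *sibling, *leader = event->group_leader;

	/* Start every group member except, when requested, @event itself. */
	perf_event_unthrottle(leader, !skip_start_event || leader != event);
	for_each_sibling_event(sibling, leader)
		perf_event_unthrottle(sibling,
				      !skip_start_event || sibling != event);
}

The callers would pass the same values as in your event_has_start
version below.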
> I think I can change the name to bool event_has_start.
> Is the name OK?
I am still confused by the naming "event_has_start" :)
What exactly does it mean?
> diff --git a/kernel/events/core.c b/kernel/events/core.c
> index a270fcda766d..b1cb07fa9c18 100644
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -2749,13 +2749,13 @@ static void perf_event_throttle(struct perf_event *event)
>  	perf_log_throttle(event, 0);
>  }
>  
> -static void perf_event_unthrottle_group(struct perf_event *event, bool start)
> +static void perf_event_unthrottle_group(struct perf_event *event, bool event_has_start)
>  {
>  	struct perf_event *sibling, *leader = event->group_leader;
>  
> -	perf_event_unthrottle(leader, leader != event || start);
> +	perf_event_unthrottle(leader, event_has_start ? leader != event : true);
>  	for_each_sibling_event(sibling, leader)
> -		perf_event_unthrottle(sibling, sibling != event || start);
> +		perf_event_unthrottle(sibling, event_has_start ? sibling != event : true);
>  }
>  
>  static void perf_event_throttle_group(struct perf_event *event)
> @@ -4423,7 +4423,7 @@ static void perf_adjust_freq_unthr_events(struct list_head *event_list)
>  
>  		if (hwc->interrupts == MAX_INTERRUPTS) {
>  			perf_event_unthrottle_group(event,
> -				!event->attr.freq || !event->attr.sample_freq);
> +				(event->attr.freq && event->attr.sample_freq));
>  		}
>  
>  		if (!event->attr.freq || !event->attr.sample_freq)
> @@ -6466,7 +6466,7 @@ static void __perf_event_period(struct perf_event *event,
>  	 * while we already re-started the event/group.
>  	 */
>  	if (event->hw.interrupts == MAX_INTERRUPTS)
> -		perf_event_unthrottle_group(event, false);
> +		perf_event_unthrottle_group(event, true);
>  	perf_pmu_enable(event->pmu);
The logic in the updated code looks correct to me.
Thanks,
Leo