Message-ID: <CAM9d7cjy-T6VsPE1VSgYFWHOhKOA5dmDXwWExvXSeJXeE1Jt1Q@mail.gmail.com>
Date: Fri, 3 Nov 2023 18:14:32 -0700
From: Namhyung Kim <namhyung@...nel.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: mingo@...nel.org, linux-kernel@...r.kernel.org, acme@...nel.org,
mark.rutland@....com, alexander.shishkin@...ux.intel.com,
jolsa@...nel.org, irogers@...gle.com, adrian.hunter@...el.com
Subject: Re: [PATCH 11/13] perf: Simplify perf_adjust_freq_unthr_context()
On Thu, Nov 2, 2023 at 8:32 AM Peter Zijlstra <peterz@...radead.org> wrote:
>
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
> kernel/events/core.c | 51 +++++++++++++++++++++++----------------------------
> 1 file changed, 23 insertions(+), 28 deletions(-)
>
> --- a/kernel/events/core.c
> +++ b/kernel/events/core.c
> @@ -4090,7 +4090,7 @@ perf_adjust_freq_unthr_context(struct pe
> if (!(ctx->nr_freq || unthrottle))
> return;
>
> - raw_spin_lock(&ctx->lock);
> + guard(raw_spinlock)(&ctx->lock);
>
> list_for_each_entry_rcu(event, &ctx->event_list, event_entry) {
> if (event->state != PERF_EVENT_STATE_ACTIVE)
> @@ -4100,7 +4100,7 @@ perf_adjust_freq_unthr_context(struct pe
> if (!event_filter_match(event))
> continue;
>
> - perf_pmu_disable(event->pmu);
> + guard(perf_pmu_disable)(event->pmu);
>
> hwc = &event->hw;
>
> @@ -4110,34 +4110,29 @@ perf_adjust_freq_unthr_context(struct pe
> event->pmu->start(event, 0);
> }
>
> - if (!event->attr.freq || !event->attr.sample_freq)
> - goto next;
> + if (event->attr.freq && event->attr.sample_freq) {
Any reason for this change? I think we can just change the
'goto next' to 'continue', no?
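
Something like this is what I had in mind - untested, just to show
the control flow once the guard is in place (the guard re-enables
the pmu when the loop iteration's scope ends, so 'continue' is fine):

		guard(perf_pmu_disable)(event->pmu);

		hwc = &event->hw;

		if (hwc->interrupts == MAX_INTERRUPTS) {
			hwc->interrupts = 0;
			perf_log_throttle(event, 1);
			event->pmu->start(event, 0);
		}

		if (!event->attr.freq || !event->attr.sample_freq)
			continue;	/* guard re-enables the pmu here */

		/* stop the event, adjust the period, restart as before */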
Also I think this code needs changes to optimize the access.
Similar to the cgroup switch case, it touches all the PMUs/events
in the context even before checking the sampling frequency.  This
is bad for uncore PMUs (and for KVM guests too).  But that's a
different issue; a rough sketch of what I mean is below.
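
Untested sketch only, and the unthrottle case still has to be
handled before the skip:

		hwc = &event->hw;

		/*
		 * Don't touch the PMU at all for events that need neither
		 * unthrottling nor a frequency adjustment; pmu->disable()
		 * and pmu->stop() are expensive for uncore PMUs (and for
		 * a guest vPMU through KVM).
		 */
		if (hwc->interrupts != MAX_INTERRUPTS &&
		    (!event->attr.freq || !event->attr.sample_freq))
			continue;

		guard(perf_pmu_disable)(event->pmu);

		/* unthrottle and adjust the period as before */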
Thanks,
Namhyung
> + /*
> + * stop the event and update event->count
> + */
> + event->pmu->stop(event, PERF_EF_UPDATE);
> +
> + now = local64_read(&event->count);
> + delta = now - hwc->freq_count_stamp;
> + hwc->freq_count_stamp = now;
> +
> + /*
> + * restart the event
> + * reload only if value has changed
> + * we have stopped the event so tell that
> + * to perf_adjust_period() to avoid stopping it
> + * twice.
> + */
> + if (delta > 0)
> + perf_adjust_period(event, period, delta, false);
>
> - /*
> - * stop the event and update event->count
> - */
> - event->pmu->stop(event, PERF_EF_UPDATE);
> -
> - now = local64_read(&event->count);
> - delta = now - hwc->freq_count_stamp;
> - hwc->freq_count_stamp = now;
> -
> - /*
> - * restart the event
> - * reload only if value has changed
> - * we have stopped the event so tell that
> - * to perf_adjust_period() to avoid stopping it
> - * twice.
> - */
> - if (delta > 0)
> - perf_adjust_period(event, period, delta, false);
> -
> - event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
> - next:
> - perf_pmu_enable(event->pmu);
> + event->pmu->start(event, delta > 0 ? PERF_EF_RELOAD : 0);
> + }
> }
> -
> - raw_spin_unlock(&ctx->lock);
> }
>
> /*
>
>