Message-ID: <87cyw6ykes.wl-maz@kernel.org>
Date: Sat, 18 Nov 2023 18:40:43 +0000
From: Marc Zyngier <maz@...nel.org>
To: Yury Norov <yury.norov@...il.com>
Cc: linux-kernel@...r.kernel.org, Will Deacon <will@...nel.org>,
Mark Rutland <mark.rutland@....com>,
linux-arm-kernel@...ts.infradead.org, Jan Kara <jack@...e.cz>,
Mirsad Todorovac <mirsad.todorovac@....unizg.hr>,
Matthew Wilcox <willy@...radead.org>,
Rasmus Villemoes <linux@...musvillemoes.dk>,
Andy Shevchenko <andriy.shevchenko@...ux.intel.com>,
Maxim Kuvyrkov <maxim.kuvyrkov@...aro.org>,
Alexey Klimov <klimov.linux@...il.com>
Subject: Re: [PATCH 31/34] drivers/perf: optimize m1_pmu_get_event_idx() by using find_bit() API
On Sat, 18 Nov 2023 15:51:02 +0000,
Yury Norov <yury.norov@...il.com> wrote:
>
> The function searches used_mask for a bit in a for-loop, bit by bit.
> We can do it faster by using the atomic find_and_set_bit().
Sure, let's do things fast. Correctness is overrated anyway.
>
> The comment on the function says that it searches for the first free
> counter, but obviously for_each_set_bit() searches for the first set
> counter.
No, it doesn't. It iterates over 'affinity', i.e. the counters the
event can count on, not over used_mask.
> The following test_and_set_bit() tries to enable an already-set
> bit, which is weird.
Maybe you could try to actually read the code?
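
For reference, here is the loop in question, with the two bitmaps it
touches spelled out (code as in the driver; comments added for
illustration):

	/* 'affinity' lists the counters this event is allowed to use. */
	for_each_set_bit(idx, &affinity, M1_PMU_NR_COUNTERS) {
		/*
		 * Claim the counter in 'used_mask', a *different*
		 * bitmap that tracks which counters are busy. A set
		 * bit means another event already owns that counter,
		 * so move on to the next affine one.
		 */
		if (!test_and_set_bit(idx, cpuc->used_mask))
			return idx;
	}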
>
> This patch, by using find_and_set_bit(), fixes this automatically.
This doesn't fix anything, but instead actively breaks the driver.
>
> Fixes: a639027a1be1 ("drivers/perf: Add Apple icestorm/firestorm CPU PMU driver")
> Signed-off-by: Yury Norov <yury.norov@...il.com>
> ---
> drivers/perf/apple_m1_cpu_pmu.c | 8 ++------
> 1 file changed, 2 insertions(+), 6 deletions(-)
>
> diff --git a/drivers/perf/apple_m1_cpu_pmu.c b/drivers/perf/apple_m1_cpu_pmu.c
> index cd2de44b61b9..2d50670ffb01 100644
> --- a/drivers/perf/apple_m1_cpu_pmu.c
> +++ b/drivers/perf/apple_m1_cpu_pmu.c
> @@ -447,12 +447,8 @@ static int m1_pmu_get_event_idx(struct pmu_hw_events *cpuc,
> * counting on the PMU at any given time, and by placing the
> * most constraining events first.
> */
> - for_each_set_bit(idx, &affinity, M1_PMU_NR_COUNTERS) {
> - if (!test_and_set_bit(idx, cpuc->used_mask))
> - return idx;
> - }
> -
> - return -EAGAIN;
> + idx = find_and_set_bit(cpuc->used_mask, M1_PMU_NR_COUNTERS);
> + return idx < M1_PMU_NR_COUNTERS ? idx : -EAGAIN;
So now you're picking any free counter, irrespective of the affinity
of the event. This is great.
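
For illustration only: a conversion that insists on a find-bit
primitive would still have to restrict the search to the affinity
mask. A minimal sketch, using the existing find_next_and_bit() and
otherwise only identifiers from the diff above, might look like:

	for (;;) {
		unsigned long free = ~*cpuc->used_mask;
		unsigned long idx;

		/* Candidates: set in 'affinity' AND clear in used_mask. */
		idx = find_next_and_bit(&affinity, &free,
					M1_PMU_NR_COUNTERS, 0);
		if (idx >= M1_PMU_NR_COUNTERS)
			return -EAGAIN;

		/* Atomically claim it; retry if we lost the race. */
		if (!test_and_set_bit(idx, cpuc->used_mask))
			return idx;
	}

which is no simpler than the loop being removed; the affinity
constraint is the whole point of that loop.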
M.
--
Without deviation from the norm, progress is not possible.