Message-ID: <20240620070649.GQ31592@noisy.programming.kicks-ass.net>
Date: Thu, 20 Jun 2024 09:06:49 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: kan.liang@...ux.intel.com
Cc: mingo@...nel.org, acme@...nel.org, namhyung@...nel.org,
irogers@...gle.com, adrian.hunter@...el.com,
alexander.shishkin@...ux.intel.com, linux-kernel@...r.kernel.org,
ak@...ux.intel.com, eranian@...gle.com,
Sandipan Das <sandipan.das@....com>,
Ravi Bangoria <ravi.bangoria@....com>,
silviazhao <silviazhao-oc@...oxin.com>,
CodyYao-oc <CodyYao-oc@...oxin.com>
Subject: Re: [RESEND PATCH 02/12] perf/x86: Support counter mask
On Tue, Jun 18, 2024 at 08:10:34AM -0700, kan.liang@...ux.intel.com wrote:
> + for_each_set_bit(idx, c->idxmsk, x86_pmu_num_counters(NULL)) {
> if (new == -1 || hwc->idx == idx)
> /* assign free slot, prefer hwc->idx */
> old = cmpxchg(nb->owners + idx, NULL, event);
> +static inline int x86_pmu_num_counters_fixed(struct pmu *pmu)
> +{
> + return hweight64(hybrid(pmu, fixed_cntr_mask64));
> +}
This is wrong. You don't iterate a bitmask by the number of bits set,
but by the highest set bit in the mask.