Message-ID: <YMujZ7a/8ToWXzo+@hirez.programming.kicks-ass.net>
Date: Thu, 17 Jun 2021 21:32:55 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: "Liang, Kan" <kan.liang@...ux.intel.com>
Cc: mingo@...hat.com, linux-kernel@...r.kernel.org, acme@...nel.org,
mark.rutland@....com, ak@...ux.intel.com,
alexander.shishkin@...ux.intel.com, namhyung@...nel.org,
jolsa@...hat.com
Subject: Re: [PATCH 0/4] perf: Fix the ctx->pmu for a hybrid system
On Thu, Jun 17, 2021 at 10:10:37AM -0400, Liang, Kan wrote:
> I think all the perf_sw_context PMUs share the same pmu_cpu_context, so the
> cpuctx->ctx.pmu should always be the first registered perf_sw_context PMU,
> which is perf_swevent. The ctx->pmu could be another software PMU.
Is there actually anything that relies on that? IIRC the sw PMUs only
use event->pmu->foo() methods (exactly because the ctx->pmu is
unreliable for them).
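
For illustration, roughly the pattern meant by "event->pmu->foo()": the
sw event path always goes through the event's own pmu pointer, never
through ctx->pmu. This is an editorial sketch, not actual core code;
sched_in_one_event() is a made-up helper (and error handling is
omitted), while perf_pmu_disable()/perf_pmu_enable(), pmu->add() and
PERF_EF_START are the real kernel interfaces:

	#include <linux/perf_event.h>

	/* Sketch only: schedule in a single event via its own pmu.
	 * event->pmu is per-event and therefore reliable, even when
	 * ctx->pmu points at some other software PMU. */
	static void sched_in_one_event(struct perf_event *event)
	{
		struct pmu *pmu = event->pmu;

		perf_pmu_disable(pmu);
		pmu->add(event, PERF_EF_START);	/* event->pmu->foo() */
		perf_pmu_enable(pmu);
	}
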
> In theory, the perf_sw_context PMUs should have a similar issue. If the
> events are from different perf_sw_context PMUs, we should perf_pmu_disable()
> all of the PMUs before scheduling them, but the ctx->pmu only tracks the
> first one.
>
> I don't have a good way to fix the perf_sw_context PMUs. I think we have to
> go through the event list and find all PMUs. But I don't think it's worth
> doing.
Yeah, the software PMUs are miserable; they're one of the things I wish
I'd done differently. Cleaning that up is *somewhere* on the TODO list.
So I *think* it should work as is and we can avoid the extra check, but
let me know what actual testing does.
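
For completeness, a sketch of what "go through the event list and find
all PMUs" from above could look like. disable_all_ctx_pmus() is a
hypothetical helper, but ctx->event_list, event_entry and
perf_pmu_disable() are the real perf-core names. Note that
perf_pmu_disable() nests, so a real version would need proper
de-duplication; the consecutive-run check below is only a cheap
approximation:

	#include <linux/perf_event.h>

	/* Sketch only: disable every PMU that has events in this
	 * context, instead of just the single PMU tracked by ctx->pmu. */
	static void disable_all_ctx_pmus(struct perf_event_context *ctx)
	{
		struct perf_event *event;
		struct pmu *prev = NULL;

		list_for_each_entry(event, &ctx->event_list, event_entry) {
			if (event->pmu == prev)
				continue;	/* skips consecutive duplicates only */
			perf_pmu_disable(event->pmu);
			prev = event->pmu;
		}
	}
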