Message-ID: <20200117085412.GU2827@hirez.programming.kicks-ass.net>
Date: Fri, 17 Jan 2020 09:54:12 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: kan.liang@...ux.intel.com
Cc: acme@...hat.com, mingo@...nel.org, linux-kernel@...r.kernel.org,
ak@...ux.intel.com, eranian@...gle.com
Subject: Re: [RESEND PATCH V2] perf/x86/intel: Avoid unnecessary PEBS_ENABLE MSR access in PMI
On Thu, Jan 16, 2020 at 10:21:12AM -0800, kan.liang@...ux.intel.com wrote:
> A PMI may land after cpuc->enabled is cleared in x86_pmu_disable(), and
> PMI throttling may then be triggered for that PMI. In this rare case,
> intel_pmu_pebs_disable() will not touch the PEBS_ENABLE MSR. This patch
> explicitly disables PEBS in that case.
  intel_pmu_handle_irq()
    pmu_enabled = cpuc->enabled;
    cpuc->enabled = 0;
    __intel_pmu_disable_all();

    ...

    x86_pmu_stop()
      intel_pmu_disable_event()
        intel_pmu_pebs_disable()
          if (cpuc->enabled) // FALSE!!!

    cpuc->enabled = pmu_enabled;
    if (pmu_enabled)
      __intel_pmu_enable_all();
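
Purely as an illustration (this is a stand-alone user-space model, not the
kernel code: wrmsrl() is replaced by a stub and struct cpu_hw_events is
reduced to the two fields that matter here), the sketch below shows why the
cpuc->enabled guard flagged FALSE!!! above leaves MSR_IA32_PEBS_ENABLE
untouched once the interrupt handler has cleared the flag:

  /* User-space model of the race above; kernel details stubbed out. */
  #include <stdio.h>
  #include <stdint.h>

  static uint64_t pebs_enable_msr = 0x1;  /* pretend PEBS is on for counter 0 */

  struct cpu_hw_events {
          int enabled;
          uint64_t pebs_enabled;
  };

  /* Stand-in for wrmsrl(MSR_IA32_PEBS_ENABLE, val). */
  static void write_pebs_enable(uint64_t val)
  {
          pebs_enable_msr = val;
          printf("MSR_IA32_PEBS_ENABLE <- %#llx\n", (unsigned long long)val);
  }

  /* Models the guard in intel_pmu_pebs_disable(): only touch the MSR
   * while the PMU is marked enabled in software. */
  static void pebs_disable(struct cpu_hw_events *cpuc, int idx)
  {
          cpuc->pebs_enabled &= ~(1ULL << idx);
          if (cpuc->enabled)              /* FALSE inside the PMI handler */
                  write_pebs_enable(cpuc->pebs_enabled);
  }

  int main(void)
  {
          struct cpu_hw_events cpuc = { .enabled = 1, .pebs_enabled = 0x1 };

          /* intel_pmu_handle_irq() clears the software flag first ... */
          int pmu_enabled = cpuc.enabled;
          cpuc.enabled = 0;

          /* ... so a throttle-driven x86_pmu_stop() -> pebs_disable()
           * updates the software copy but never writes the MSR. */
          pebs_disable(&cpuc, 0);

          printf("software pebs_enabled=%#llx, MSR still=%#llx\n",
                 (unsigned long long)cpuc.pebs_enabled,
                 (unsigned long long)pebs_enable_msr);

          /* Mirrors the tail of intel_pmu_handle_irq(). */
          cpuc.enabled = pmu_enabled;
          return 0;
  }

Running it prints a software pebs_enabled of 0 while the (modeled) MSR still
holds 0x1, which is the stale state the quoted hunk below tries to paper over.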
> @@ -2620,6 +2627,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
>  		handled++;
>  		x86_pmu.drain_pebs(regs);
>  		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
> +
> +		/*
> +		 * PMI may land after cpuc->enabled=0 in x86_pmu_disable() and
> +		 * PMI throttle may be triggered for the PMI.
> +		 * For this rare case, intel_pmu_pebs_disable() will not touch
> +		 * MSR_IA32_PEBS_ENABLE. Explicitly disable the PEBS here.
> +		 */
> +		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
> +			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
>  	}
How does that make sense? AFAICT this is all still completely broken.
Please be more careful.