Message-ID: <3e117702-c07f-bd58-9931-766c2698b5d7@linux.intel.com>
Date:   Fri, 15 Nov 2019 13:04:50 -0500
From:   "Liang, Kan" <kan.liang@...ux.intel.com>
To:     Peter Zijlstra <peterz@...radead.org>
Cc:     acme@...hat.com, mingo@...nel.org, linux-kernel@...r.kernel.org,
        ak@...ux.intel.com, eranian@...gle.com
Subject: Re: [PATCH] perf/x86/intel: Avoid PEBS_ENABLE MSR access in PMI



On 11/15/2019 9:46 AM, Liang, Kan wrote:
> 
> 
> On 11/15/2019 9:07 AM, Peter Zijlstra wrote:
>> On Fri, Nov 15, 2019 at 05:39:17AM -0800, kan.liang@...ux.intel.com 
>> wrote:
>>> From: Kan Liang <kan.liang@...ux.intel.com>
>>>
>>> The perf PMI handler, intel_pmu_handle_irq(), currently does
>>> unnecessary MSR accesses when PEBS is enabled.
>>>
>>> When entering the handler, global ctrl is explicitly disabled, so
>>> none of the counters count anymore, whether PEBS is enabled or not.
>>> Furthermore, cpuc->pebs_enabled is not changed in the PMI, so the
>>> PEBS status doesn't change and the PEBS_ENABLE MSR doesn't need to
>>> be changed either.
>>
>> PMI can throttle, and iirc x86_pmu_stop() ends up in
>> intel_pmu_pebs_disable()
>>
> 
> Right, the description is inaccurate. I will fix it in v2.
> But the patch still works for the PMI-throttle case:
> intel_pmu_pebs_disable() will update cpuc->pebs_enabled and
> unconditionally modify MSR_IA32_PEBS_ENABLE.
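
For reference, the throttle path being discussed is roughly the
following (from my reading of the code, so treat the chain as a
sketch):

  handle_pmi_common()
    -> perf_event_overflow()     /* returns true when the event is throttled */
    -> x86_pmu_stop()
    -> x86_pmu.disable()         /* intel_pmu_disable_event() */
    -> intel_pmu_pebs_disable()  /* for events with precise_ip set */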

However, that turns out not to be true in one corner case: a PMI may
land after cpuc->enabled has been set to 0 in x86_pmu_disable(), and
PMI throttling may be triggered for that PMI. In this rare case,
intel_pmu_pebs_disable() will not touch MSR_IA32_PEBS_ENABLE at all.
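
That is because the tail of intel_pmu_pebs_disable() (in
arch/x86/events/intel/ds.c; simplified here from memory) only writes
the MSR while the PMU is software-enabled:

	/* drop this event's bit from the software mask */
	cpuc->pebs_enabled &= ~(1ULL << hwc->idx);

	/* the MSR write is skipped when cpuc->enabled == 0 */
	if (cpuc->enabled)
		wrmsrl(MSR_IA32_PEBS_ENABLE, cpuc->pebs_enabled);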

I don't have a test case for this corner case, but I think we may have
to handle it in the PMI handler as well. I will add the code below in
V2.

diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
index bc6468329c52..7198a372a5ab 100644
--- a/arch/x86/events/intel/core.c
+++ b/arch/x86/events/intel/core.c
@@ -2620,6 +2624,15 @@ static int handle_pmi_common(struct pt_regs *regs, u64 status)
 		handled++;
 		x86_pmu.drain_pebs(regs);
 		status &= x86_pmu.intel_ctrl | GLOBAL_STATUS_TRACE_TOPAPMI;
+
+		/*
+		 * PMI may land after cpuc->enabled=0 in x86_pmu_disable() and
+		 * PMI throttle may be triggered for the PMI.
+		 * For this rare case, intel_pmu_pebs_disable() will not touch
+		 * MSR_IA32_PEBS_ENABLE. Explicitly disable the PEBS here.
+		 */
+		if (unlikely(!cpuc->enabled && !cpuc->pebs_enabled))
+			wrmsrl(MSR_IA32_PEBS_ENABLE, 0);
 	}
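
The !cpuc->pebs_enabled half of the check is needed because, iirc,
intel_pmu_pebs_enable_all() likewise only writes the MSR when
cpuc->pebs_enabled is non-zero. While some PEBS bits are still set the
MSR gets corrected on the next enable anyway, but once the mask drops
to zero with the PMU disabled, a stale hardware value would never be
cleared, so clear it explicitly here.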


Thanks,
Kan

> When exiting the handler, perf currently re-writes the PEBS MSR
> according to the updated cpuc->pebs_enabled, which is still
> unnecessary.
> 
