Date:   Tue, 29 Nov 2016 19:07:25 +0000
From:   "Liang, Kan" <kan.liang@...el.com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Stephane Eranian <eranian@...gle.com>
CC:     "mingo@...hat.com" <mingo@...hat.com>,
        LKML <linux-kernel@...r.kernel.org>,
        Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
        "ak@...ux.intel.com" <ak@...ux.intel.com>,
        "Odzioba, Lukasz" <lukasz.odzioba@...el.com>
Subject: RE: [PATCH] perf/x86: fix event counter update issue



> On Tue, Nov 29, 2016 at 09:20:10AM -0800, Stephane Eranian wrote:
> > Max period is limited by the number of bits the kernel can write to
> > an MSR.
> > Used to be 31, now it is 47 for core PMU as per patch pointed to by Kan.
> 
> No, I think it sets it to 48 now, which is the problem. It should be 1 bit less
> than the total width.
> 
> So something like so.
> 
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index a74a2dbc0180..cb8522290e6a 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4034,7 +4034,7 @@ __init int intel_pmu_init(void)
> 
>  	/* Support full width counters using alternative MSR range */
>  	if (x86_pmu.intel_cap.full_width_write) {
> -		x86_pmu.max_period = x86_pmu.cntval_mask;
> +		x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
>  		x86_pmu.perfctr = MSR_IA32_PMC0;
>  		pr_cont("full-width counters, ");
>  	}

It doesn't work. 
perf stat -x, -C1 -e cycles -- sudo taskset 0x2 ./loop 100000000000
18446743727217821696,,cycles,313837854019,100.00

delta 0xffffff8000001803 new 0x1804 prev 0xffffff8000000001

I guess we need at least x86_pmu.cntval_mask >> 2 to keep the sign
bit from being set.
I'm testing it now.


Also, no matter how it's fixed, I think we'd better add a WARN_ONCE
for a negative delta.

diff --git a/arch/x86/events/core.c b/arch/x86/events/core.c
index 6c3b0ef..2ce8299 100644
--- a/arch/x86/events/core.c
+++ b/arch/x86/events/core.c
@@ -100,6 +100,9 @@ u64 x86_perf_event_update(struct perf_event *event)
 	delta = (new_raw_count << shift) - (prev_raw_count << shift);
 	delta >>= shift;
 
+	WARN_ONCE((delta < 0), "counter increment must be positive. delta 0x%llx new 0x%llx prev 0x%llx\n",
+		  delta, new_raw_count, prev_raw_count);
+
 	local64_add(delta, &event->count);
 	local64_sub(delta, &hwc->period_left);

Thanks,
Kan
