Message-ID: <20161129183703.GC8388@tassilo.jf.intel.com>
Date: Tue, 29 Nov 2016 10:37:03 -0800
From: Andi Kleen <ak@...ux.intel.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>,
"Liang, Kan" <kan.liang@...el.com>,
"mingo@...hat.com" <mingo@...hat.com>,
LKML <linux-kernel@...r.kernel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
"Odzioba, Lukasz" <lukasz.odzioba@...el.com>
Subject: Re: [PATCH] perf/x86: fix event counter update issue
On Tue, Nov 29, 2016 at 06:30:55PM +0100, Peter Zijlstra wrote:
> On Tue, Nov 29, 2016 at 09:20:10AM -0800, Stephane Eranian wrote:
> > Max period is limited by the number of bits the kernel can write to an MSR.
> > Used to be 31, now it is 47 for core PMU as per patch pointed to by Kan.
>
> No, I think it sets it to 48 now, which is the problem. It should be 1
> bit less than the total width.
>
> So something like so.
That looks good. Kan, can you test it?
-Andi
>
> diff --git a/arch/x86/events/intel/core.c b/arch/x86/events/intel/core.c
> index a74a2dbc0180..cb8522290e6a 100644
> --- a/arch/x86/events/intel/core.c
> +++ b/arch/x86/events/intel/core.c
> @@ -4034,7 +4034,7 @@ __init int intel_pmu_init(void)
>
> /* Support full width counters using alternative MSR range */
> if (x86_pmu.intel_cap.full_width_write) {
> - x86_pmu.max_period = x86_pmu.cntval_mask;
> + x86_pmu.max_period = x86_pmu.cntval_mask >> 1;
> x86_pmu.perfctr = MSR_IA32_PMC0;
> pr_cont("full-width counters, ");
> }