Message-ID: <37D7C6CF3E00A74B8858931C1DB2F07750CA4064@SHSMSX103.ccr.corp.intel.com>
Date: Tue, 29 Nov 2016 14:46:14 +0000
From: "Liang, Kan" <kan.liang@...el.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: "mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"eranian@...gle.com" <eranian@...gle.com>,
"alexander.shishkin@...ux.intel.com"
<alexander.shishkin@...ux.intel.com>,
"ak@...ux.intel.com" <ak@...ux.intel.com>,
"Odzioba, Lukasz" <lukasz.odzioba@...el.com>
Subject: RE: [PATCH] perf/x86: fix event counter update issue
> So caveat that I'm ill and cannot think much..
>
> On Mon, Nov 28, 2016 at 11:26:46AM -0800, kan.liang@...el.com wrote:
>
> > Here, all the possible failure cases are listed.
> > Terms:
> > - new: the current PMU counter value, read via rdpmcl().
> > - prev: the previous counter value, stored in &hwc->prev_count.
> > - in PMI/not in PMI: whether the event update happens in the PMI handler.
>
> > Current code to calculate delta:
> > delta = (new << shift) - (prev << shift);
> > delta >>= shift;
> >
> > Case A: Not in PMI. new > prev. But delta is negative.
> > That's the failure case of Test 2.
> > delta is s64 type; new and prev are u64 type. If new is big
> > enough, the most significant bit (bit 63) of the result of the
> > left shift and subtraction may be 1.
> > After converting to s64, the sign bit is set. Since delta is
> > s64, an arithmetic right shift is applied, which copies the sign
> > bit into the vacated upper bit positions of delta.
> > It can be fixed by adding the max count value.
> >
> > Here is the real data for test2 on KNL.
> > new = aea96e1927
> > prev = ffffff0000000001
> > delta = aea96e1927000000 - 1000000 = aea96e1926000000
> > aea96e1926000000 >> 24 = ffffffaea96e1926 << negative delta
>
> How can this happen? IIRC the thing increments, we program a negative
> value, and when it passes 0 we generate a PMI.
>
> And note that we _ALWAYS_ set the IN bits, even for !sampling events.
> Also note we set max_period to (1<<31) - 1, so we should never exceed 31
> bits.
>
The max_period is 0xffffffffff.
The 31-bit limit was broken by this patch:
069e0c3c4058 ("perf/x86/intel: Support full width counting")
https://patchwork.kernel.org/patch/2784191/
	/* Support full width counters using alternative MSR range */
	if (x86_pmu.intel_cap.full_width_write) {
		x86_pmu.max_period = x86_pmu.cntval_mask;
		x86_pmu.perfctr = MSR_IA32_PMC0;
		pr_cont("full-width counters, ");
	}
>
> > Case B: In PMI. new > prev. delta is positive.
> > That's the failure case of Test 3.
> > The PMI is triggered by overflow, but there is latency between
> > the overflow and the PMI handler, so new holds only a small value.
> > The current calculation loses the max count value.
>
> That doesn't make sense, per the 31bit limit.
>
>
> > Case C: In PMI. new < prev. delta is negative.
> > The PMU counter may start from a big value, e.g. when the fixed
> > period is small.
> > It can be fixed by adding the max count value.
>
> Doesn't make sense, how can this happen?