Message-ID: <20100915174012.GC5959@lenovo>
Date: Wed, 15 Sep 2010 21:40:12 +0400
From: Cyrill Gorcunov <gorcunov@...il.com>
To: Robert Richter <robert.richter@....com>
Cc: Stephane Eranian <eranian@...gle.com>, Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <peterz@...radead.org>,
Don Zickus <dzickus@...hat.com>,
"fweisbec@...il.com" <fweisbec@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"ying.huang@...el.com" <ying.huang@...el.com>,
"ming.m.lin@...el.com" <ming.m.lin@...el.com>,
"yinghai@...nel.org" <yinghai@...nel.org>,
"andi@...stfloor.org" <andi@...stfloor.org>
Subject: Re: [PATCH] perf, x86: catch spurious interrupts after disabling
counters
On Wed, Sep 15, 2010 at 07:28:05PM +0200, Robert Richter wrote:
> On 15.09.10 13:02:22, Cyrill Gorcunov wrote:
> > > what's for sure, is that you can have an interrupt in flight by the time
> > > you disable.
> > >
> >
> > I fear you can -- look at x86_pmu_stop():
> >
> > if (__test_and_clear_bit(hwc->idx, cpuc->active_mask)) {
> >
> > ---> active_mask is cleared here for sure,
> > ---> but the counter still ticks: an NMI can still fire and
> > ---> get latched after active_mask is cleared but before
> > ---> you actually disable the counter
> >
> > x86_pmu.disable(event);
> > cpuc->events[hwc->idx] = NULL;
> > WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
> > hwc->state |= PERF_HES_STOPPED;
> > }
> >
> > No?
>
> I tried reordering this too, but it didn't fix it.
>
> -Robert
>
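For reference, the reordering Robert mentions having tried would look
roughly like the sketch below (illustration only, not his actual patch):
disable the counter in hardware before clearing its bit in active_mask,
so the NMI handler still sees the counter as active while an in-flight
NMI is pending. As he reports, this does not cure the problem, since an
NMI latched before the hardware disable takes effect can still arrive
afterwards.

	if (test_bit(hwc->idx, cpuc->active_mask)) {
		/* stop the counter in hardware first ... */
		x86_pmu.disable(event);
		/* ... and only then hide it from the NMI handler */
		__clear_bit(hwc->idx, cpuc->active_mask);
		cpuc->events[hwc->idx] = NULL;
		WARN_ON_ONCE(hwc->state & PERF_HES_STOPPED);
		hwc->state |= PERF_HES_STOPPED;
	}
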
Yeah, already noted that from your previous email. Perhaps we might
take a somewhat simpler approach then -- in the NMI handler, where we
mark the "next nmi", we could account not for just "one next" NMI but
for the number of counters handled minus the one just handled (of
course clearing this count if a new "non-spurious" NMI comes in).
Can't say I like this approach, but it's just a thought.
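
A minimal sketch of that idea, with made-up names (pmu_spurious_budget,
handle_pmu_overflows()) standing in for the real perf_event_nmi_handler()
bookkeeping -- just to show the accounting, not a patch:

	#include <linux/percpu.h>

	/* per-CPU budget of extra back-to-back NMIs we expect to swallow */
	static DEFINE_PER_CPU(unsigned int, pmu_spurious_budget);

	static int pmu_nmi_sketch(void)
	{
		/* hypothetical helper: services overflowed counters, returns count */
		int handled = handle_pmu_overflows();

		if (handled) {
			/*
			 * Each counter handled beyond the first may have latched
			 * its own NMI that is still in flight; remember how many
			 * such NMIs to treat as non-spurious.  A genuine NMI
			 * resets (rather than accumulates) the budget.
			 */
			__get_cpu_var(pmu_spurious_budget) = handled - 1;
			return 1;	/* handled */
		}

		/* no counter claimed this NMI: eat it if it was expected */
		if (__get_cpu_var(pmu_spurious_budget)) {
			__get_cpu_var(pmu_spurious_budget)--;
			return 1;	/* tail of a previous burst */
		}

		return 0;		/* genuinely unknown NMI */
	}
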
-- Cyrill