Message-ID: <1290771419.2145.137.camel@laptop>
Date: Fri, 26 Nov 2010 12:36:59 +0100
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Stephane Eranian <eranian@...gle.com>
Cc: Lin Ming <lin@...g.vg>, Lin Ming <ming.m.lin@...el.com>,
Ingo Molnar <mingo@...e.hu>, Andi Kleen <andi@...stfloor.org>,
lkml <linux-kernel@...r.kernel.org>,
Frederic Weisbecker <fweisbec@...il.com>,
Arjan van de Ven <arjan@...radead.org>
Subject: Re: [RFC PATCH 2/3 v2] perf: Implement Nehalem uncore pmu
On Fri, 2010-11-26 at 12:25 +0100, Stephane Eranian wrote:
> On Fri, Nov 26, 2010 at 12:24 PM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> > On Fri, 2010-11-26 at 09:18 +0100, Stephane Eranian wrote:
> >
> >> In the perf_event model, given that any one of the 4 cores can be used
> >> to program uncore events, you have no choice but to broadcast to all
> >> 4 cores. Each has to demultiplex and figure out which of its counters
> >> have overflowed.
> >
> > Not really, you can redirect all these events to the first online cpu of
> > the node.
> >
> > You can re-write event->cpu in pmu::event_init(), and register cpu
> > hotplug notifiers to migrate the state around.
> >
> I am sure you could. But then the user thinks the event is controlled
> from CPUx when it's actually from CPUz. I am sure it can work but
> that's confusing, especially interrupt-wise.
Well, it's either that or keeping node-wide state like we do for AMD
and serializing everything from there. For the first option, something
along these lines in pmu::event_init() would do the redirect (sketch
below).
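Untested, and uncore_pmu is just a placeholder struct pmu, not the
actual patch code:

static int uncore_pmu_event_init(struct perf_event *event)
{
	int node;

	if (event->attr.type != uncore_pmu.type)
		return -ENOENT;

	/*
	 * The uncore is per node; steer every event to the first online
	 * cpu of its node so programming and interrupt handling are
	 * serialized on one core.
	 */
	if (event->cpu >= 0) {
		node = cpu_to_node(event->cpu);
		event->cpu = cpumask_first_and(cpumask_of_node(node),
					       cpu_online_mask);
	}

	return 0;
}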
And I'm not sure which is more expensive, steering the interrupt to one
core only or broadcasting every interrupt; I'd favour the first
approach.
The whole thing is a node-wide resource, so the user needs to think in
nodes anyway; we already do a cpu->node mapping to identify the thing.
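The hotplug side would then migrate the per-node context whenever the
designated cpu goes away. Roughly like the below (untested; a helper
like perf_pmu_migrate_context() is assumed here, nothing in the current
tree provides it):

static int uncore_cpu_notifier(struct notifier_block *self,
			       unsigned long action, void *hcpu)
{
	unsigned int cpu = (unsigned long)hcpu;
	unsigned int target;

	switch (action & ~CPU_TASKS_FROZEN) {
	case CPU_DOWN_PREPARE:
		/*
		 * Pick another cpu on the same node to take over; a real
		 * version would also check cpu_online_mask.
		 */
		target = cpumask_any_but(cpumask_of_node(cpu_to_node(cpu)), cpu);
		if (target < nr_cpu_ids)
			perf_pmu_migrate_context(&uncore_pmu, cpu, target);
		break;
	}

	return NOTIFY_OK;
}

/* registered at init time with register_cpu_notifier() */
static struct notifier_block uncore_cpu_nb = {
	.notifier_call = uncore_cpu_notifier,
};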