Message-ID: <20101214141314.GC21257@basil.fritz.box>
Date: Tue, 14 Dec 2010 15:13:14 +0100
From: Andi Kleen <andi@...stfloor.org>
To: Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc: Lin Ming <ming.m.lin@...el.com>,
Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
Stephane Eranian <eranian@...gle.com>,
Andi Kleen <andi@...stfloor.org>, Ingo Molnar <mingo@...e.hu>,
Frederic Weisbecker <fweisbec@...il.com>,
Arjan van de Ven <arjan@...radead.org>,
lkml <linux-kernel@...r.kernel.org>,
Carl Love <cel@...ux.ibm.com>
Subject: Re: [RFC PATCH 3/3 v3] perf: Update perf tool to monitor uncore
events

On Tue, Dec 14, 2010 at 01:33:27PM +0100, Peter Zijlstra wrote:
> > > First of all, "uncore" is an x86-specific term and so it's not clear to
> > > me if you meant for all arches to utilize this encoding for all "not
> > > core but on the same die" events (IBM Power arch refers to this as
> > > "nest" logic).
>
> I don't think the x86 uncore matches the "not on core but on the same
> die" definition. The x86-uncore thing is more like a memory controller
> PMU (and since the memory controller is on die it is of course on die,

memory controller + interconnect + cache + power management + various
other things. Older x86 CPUs also had special PMUs on die for parts of
that, but without a memory controller.

> but it's not just any random on-die thing).
>
> The wire-speed thing has tons of special purpose 'cores' on die, each of
> them having a PMU.

Modern x86 CPUs also have other PMUs, at least in the package (e.g. in
the GPU).

> Using the sysfs stuff you could actually expose each individually.

I expect this will also be needed on x86. There are also x86 SoCs where
other parts of the SoC will have their own counters. So in general a
flexible scheme to describe this is useful.
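
For illustration only (not part of the patch being discussed): a minimal
userspace sketch of how such a per-PMU sysfs entry could be consumed,
assuming a /sys/bus/event_source/devices/<pmu>/type layout. The PMU name
"uncore_imc_0" and the config value are made-up placeholders.

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

/* Read the dynamic type id the kernel would publish for a named PMU. */
static int read_pmu_type(const char *pmu_name)
{
	char path[256];
	int type = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/bus/event_source/devices/%s/type", pmu_name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &type) != 1)
		type = -1;
	fclose(f);
	return type;
}

int main(void)
{
	struct perf_event_attr attr;
	long long count;
	int type, fd;

	/* "uncore_imc_0" is a hypothetical PMU name, for illustration. */
	type = read_pmu_type("uncore_imc_0");
	if (type < 0) {
		fprintf(stderr, "PMU not found in sysfs\n");
		return 1;
	}

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.type = type;	/* dynamic type id read from sysfs */
	attr.config = 0x01;	/* event encoding is PMU-specific; placeholder */

	/* Uncore-style PMUs count system-wide, not per task: pid = -1, cpu = 0. */
	fd = syscall(__NR_perf_event_open, &attr, -1, 0, -1, 0);
	if (fd < 0) {
		perror("perf_event_open");
		return 1;
	}

	sleep(1);
	if (read(fd, &count, sizeof(count)) == sizeof(count))
		printf("count: %lld\n", count);
	close(fd);
	return 0;
}

The only PMU-specific pieces are the type id and the config encoding;
everything else stays the generic perf_event_open() path, which is what
makes describing each PMU in sysfs attractive.
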
-Andi