Message-ID: <4B620AD4.8000108@linux.vnet.ibm.com>
Date: Thu, 28 Jan 2010 14:08:20 -0800
From: Corey Ashford <cjashfor@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>
CC: Ingo Molnar <mingo@...e.hu>, LKML <linux-kernel@...r.kernel.org>,
Andi Kleen <andi@...stfloor.org>,
Paul Mackerras <paulus@...ba.org>,
Stephane Eranian <eranian@...glemail.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Xiao Guangrong <xiaoguangrong@...fujitsu.com>,
Dan Terpstra <terpstra@...s.utk.edu>,
Philip Mucci <mucci@...s.utk.edu>,
Maynard Johnson <mpjohn@...ibm.com>,
Carl Love <cel@...ibm.com>,
Steven Rostedt <rostedt@...dmis.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Masami Hiramatsu <mhiramat@...hat.com>
Subject: Re: [RFC] perf_events: support for uncore a.k.a. nest units
On 1/28/2010 11:06 AM, Peter Zijlstra wrote:
> On Thu, 2010-01-28 at 10:00 -0800, Corey Ashford wrote:
>>
>> I don't quite get what you're saying here. Perhaps you are thinking
>> that all uncore units are associated with a particular cpu node, or a
>> set of cpu nodes? And that there's only one uncore unit per cpu (or set
>> of cpus) that needs to be addressed, i.e. no ambiguity?
>
> Well, I was initially thinking of the intel uncore thing which is memory
> controller, so node, level.
>
> But all system topology bound pmus can be done that way.
>
>> That is not going to be the case for all systems. We can have uncore
>> units that are associated with the entire system,
>
> Right, but that's simple too.
>
>> for example PMUs in an I/O device.
>
>> And we can have multiple uncore units of a particular
>> type, for example multiple vector coprocessors, each with its own PMU,
>> each associated with a single cpu or a set of cpus.
>>
>> perf_events needs an addressing scheme that covers these cases.
>
> You could possibly add a u64 pmu_id field to perf_event_attr and use
> that together with things like:
>
> PERF_TYPE_PCI, attr.pmu_id = domain:bus:device:function encoding
> PERF_TYPE_SPU, attr.pmu_id = spu-id
>
Thank you for the clarification.
One of Ingo's comments in this thread was that he wants perf to be able to
display to the user the available PMUs along with their respective events, and
that perf would parse some machine-independent data structure (somewhere) to get
this info. This same info would give the user a way to specify which PMU he
wants to address. He'd also like all of the event info data to reside in the
same place. I hope I am paraphrasing him correctly.
I can see that with the scheme you have proposed above, it would be
straightforward to encode PMU ids for a particular new PERF_TYPE_* system
topology, but I don't see a clear way of providing perf with enough information
to tell it which particular topology is being used, how many units of each PMU
type exist, and so on. Do you have any ideas as to how to accomplish this goal
with the method you are suggesting?
This is one of the reasons why I am leaning toward a /sys/devices-style data
structure: the kernel could easily build it based on the PMUs that it discovers
(through whatever means), the user can fairly easily choose a PMU from this
structure to open, and it's unambiguous to the kernel which PMU the user
really wants.
I am not convinced, though, that this is the right place to put the event info
for each PMU.
> But before we go there the perf core needs to be extended to deal with
> multiple hardware pmus, something which isn't too hard but we need to be
> careful not to bloat the normal code paths for these somewhat esoteric
> use cases.
>
Is this something you've looked into? If so, what sort of issues have you
discovered?
Thanks,
- Corey