Message-ID: <497106C5.60703@us.ibm.com>
Date: Fri, 16 Jan 2009 16:14:29 -0600
From: Maynard Johnson <maynardj@...ibm.com>
To: Ingo Molnar <mingo@...e.hu>
CC: Corey Ashford <cjashfor@...ux.vnet.ibm.com>,
Andi Kleen <andi@...stfloor.org>,
Paul Mackerras <paulus@...ba.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Andrew Morton <akpm@...ux-foundation.org>,
Stephane Eranian <eranian@...glemail.com>,
Eric Dumazet <dada1@...mosbay.com>,
Robert Richter <robert.richter@....com>,
Arjan van de Ven <arjan@...radead.org>,
Peter Anvin <hpa@...or.com>,
"David S. Miller" <davem@...emloft.net>,
perfctr-devel@...ts.sourceforge.net
Subject: Re: [patch] Performance Counters for Linux, v4
Corey Ashford wrote:
> Andi Kleen wrote:
>> Paul Mackerras <paulus@...ba.org> writes:
>>> The perf counter subsystem will, in Ingo's design, naturally try to
>>> schedule as many counters and groups on as it can. Given a list of
>>> counters/groups, it could start with the first and keep on trying to
>>> add counters or groups while it can, essentially trying all possible
>>> combinations until it either fills up all the hardware counters or
>>> exhausts the possible combinations. If it moves all the
>>> counters/groups that do fit on up to the head of the list, and then
>>> rotates them to the back of the list when the timeslice expires, that
>>> would probably be OK. In fact the computation about what set of
>>> counters/groups to put on should be done when adding/removing a
>>> counter/group and when the timeslice expires, rather than at context
>>> switch time. (I'm talking about the list of part-time counters/groups
>>> here, of course.)
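(Interjecting here: to make Paul's fill-from-the-head, rotate-on-expiry idea
concrete, a rough sketch follows. None of these names or structures come from
the actual patchset; they are mine, and the rotation is simplified to moving
one group per timeslice.)

#include <linux/list.h>

/*
 * Illustrative only: a counter "group" needing some number of hardware
 * counters, kept on the list of part-time groups.
 */
struct pt_group {
	struct list_head entry;
	int nr_hw;			/* hardware counters this group needs */
};

/* Greedily schedule groups starting from the head of the list. */
static void pt_schedule(struct list_head *groups, int hw_free)
{
	struct pt_group *g;

	list_for_each_entry(g, groups, entry) {
		if (g->nr_hw > hw_free)
			continue;	/* doesn't fit; try the next group */
		hw_free -= g->nr_hw;
		/* ... program this group onto the PMU ... */
	}
}

/*
 * On timeslice expiry: rotate the head group to the tail and recompute
 * the schedule.
 */
static void pt_rotate(struct list_head *groups, int hw_total)
{
	if (!list_empty(groups))
		list_move_tail(groups->next, groups);
	pt_schedule(groups, hw_total);
}

Doing this recomputation only on add/remove and on timeslice expiry, as Paul
suggests, keeps it off the context-switch path.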
>> One issue is that PMU counts can cover more than one CPU. Examples of
>> this are the Uncore events on Nehalem (which cover a whole socket), or
>> the AnyThreads monitoring mode (where you get events from both SMT
>> siblings in a core).
>>
>> With that, you would need to examine other CPUs' state at context switch
>> time. Probably not a good idea for scalability.
>>
>> -Andi
>>
>
> Over time, it seems clear that we will see multi-core processor designs
> with increasingly large uncore/nest facilities, so this could become
> more and more of an issue.
Ingo, I'll add my voice to the chorus here. To reiterate the point: some PMUs count events that are external to the processor cores, and such events cannot be attributed to any one particular CPU, and certainly not to a particular pid. The current interface does not allow the user to pass -1 for both pid and cpu, yet that seems to be exactly what these off-core events would need. Can this use case fit within the current interface, or is some sort of extension needed?
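Concretely, what I'd like to be able to express looks something like the
sketch below. The counter_open() stub and the event code are made up purely
for illustration (I'm not trying to reproduce the real syscall signature);
the point is only the pid == -1 && cpu == -1 combination:

#include <stdio.h>
#include <sys/types.h>

/* Stand-in stub for however the real counter-open call is invoked. */
static int counter_open(unsigned int event, pid_t pid, int cpu, int group_fd)
{
	(void)event; (void)pid; (void)cpu; (void)group_fd;
	return -1;	/* today, pid == -1 && cpu == -1 is rejected */
}

int main(void)
{
	/*
	 * pid == -1: the event cannot be attributed to any one task.
	 * cpu == -1: it cannot be attributed to any one CPU either; it is
	 *            counted by an off-core (uncore/nest) unit shared by
	 *            the whole socket.
	 */
	int fd = counter_open(0x42 /* made-up uncore event code */,
			      -1 /* pid */, -1 /* cpu */, -1 /* no group */);

	if (fd < 0)
		printf("pid == -1 && cpu == -1 is currently not allowed\n");
	return 0;
}

Semantically, the result would be a single socket-wide (or system-wide)
counter rather than a per-task or per-CPU one.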
Thanks.
-Maynard
>
> - Corey
>
> Corey Ashford
> Software Engineer
> IBM Linux Technology Center, Linux Toolchain
> Beaverton, OR
> 503-578-3507
> cjashfor@...ibm.com
>