Message-ID: <20110422195445.GA17583@elte.hu>
Date: Fri, 22 Apr 2011 21:54:45 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Andi Kleen <ak@...ux.intel.com>
Cc: Arnaldo Carvalho de Melo <acme@...radead.org>,
linux-kernel@...r.kernel.org,
Peter Zijlstra <peterz@...radead.org>,
Stephane Eranian <eranian@...il.com>,
Lin Ming <ming.m.lin@...el.com>,
Arnaldo Carvalho de Melo <acme@...hat.com>
Subject: Re: [PATCH 1/1] perf tools: Add missing user space support for
config1/config2
* Andi Kleen <ak@...ux.intel.com> wrote:
> On Fri, Apr 22, 2011 at 08:34:29AM +0200, Ingo Molnar wrote:
> > This needs to be a *lot* more user-friendly. Users do not want to type in
> > stupid hex magic numbers to get profiling. We have really moved beyond the
> > oprofile era.
>
> I agree that the raw events are quite user unfriendly.
>
> Unfortunately, unlike in oprofile, they are currently the way of life in
> perf if you want any CPU-specific events like this.
Not sure where you got that blanket statement from, but no, raw events are
not really the 'way of life' - judging by the user feedback we get, they
come up pretty rarely.
The thing is, most people just use the default 'perf record' and that's it -
they do not even care about a *single* event - they just want to profile their
code somehow.
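To illustrate - a minimal sketch, with ./myapp standing in for whatever
binary the user cares about:

  # record with the default event (cycles), then browse the profile:
  perf record ./myapp
  perf report

That is the whole workflow for the common case - no event names, no hex
codes anywhere.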
Then the second most popular event category is the generalized events - the
ones you can see in perf list output (a usage example follows the list):
  cpu-cycles OR cycles                      [Hardware event]
  instructions                              [Hardware event]
  cache-references                          [Hardware event]
  cache-misses                              [Hardware event]
  branch-instructions OR branches           [Hardware event]
  branch-misses                             [Hardware event]
  bus-cycles                                [Hardware event]

  cpu-clock                                 [Software event]
  task-clock                                [Software event]
  page-faults OR faults                     [Software event]
  minor-faults                              [Software event]
  major-faults                              [Software event]
  context-switches OR cs                    [Software event]
  cpu-migrations OR migrations              [Software event]
  alignment-faults                          [Software event]
  emulation-faults                          [Software event]

  L1-dcache-loads                           [Hardware cache event]
  L1-dcache-load-misses                     [Hardware cache event]
  L1-dcache-stores                          [Hardware cache event]
  L1-dcache-store-misses                    [Hardware cache event]
  L1-dcache-prefetches                      [Hardware cache event]
  L1-dcache-prefetch-misses                 [Hardware cache event]
  L1-icache-loads                           [Hardware cache event]
  L1-icache-load-misses                     [Hardware cache event]
  L1-icache-prefetches                      [Hardware cache event]
  L1-icache-prefetch-misses                 [Hardware cache event]
  LLC-loads                                 [Hardware cache event]
  LLC-load-misses                           [Hardware cache event]
  LLC-stores                                [Hardware cache event]
  LLC-store-misses                          [Hardware cache event]
  LLC-prefetches                            [Hardware cache event]
  LLC-prefetch-misses                       [Hardware cache event]
  dTLB-loads                                [Hardware cache event]
  dTLB-load-misses                          [Hardware cache event]
  dTLB-stores                               [Hardware cache event]
  dTLB-store-misses                         [Hardware cache event]
  dTLB-prefetches                           [Hardware cache event]
  dTLB-prefetch-misses                      [Hardware cache event]
  iTLB-loads                                [Hardware cache event]
  iTLB-load-misses                          [Hardware cache event]
  branch-loads                              [Hardware cache event]
  branch-load-misses                        [Hardware cache event]
These are useful but are used less frequently.
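For example, counting a handful of generalized events requires no
CPU-specific knowledge at all (./myapp is again just a placeholder):

  # the same command line works unchanged on any supported CPU:
  perf stat -e cycles,instructions,cache-references,cache-misses ./myapp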
Then come tracepoint-based events - and, as a distant last, raw events.
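A tracepoint sketch - this assumes a kernel with tracepoints enabled and
enough privileges to use them:

  # count scheduler context-switch tracepoints system-wide for 5 seconds:
  perf stat -e sched:sched_switch -a sleep 5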
Yes, raw events are useful occasionally - just like modifying applications
with a hex editor is useful occasionally. If it is done often, we had better
abstract it out.
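For contrast, this is roughly what the raw-event version of a measurement
looks like. The rNNN syntax is the real perf raw-event format, but the code
below is a made-up placeholder - the actual (event | umask << 8) value has
to be looked up in the CPU vendor's manual, separately for each CPU model:

  # r1a8 is a placeholder raw code, only valid on one specific CPU model:
  perf stat -e r1a8 ./myapp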
> Really, to make sense out of all this you need full per-CPU event lists.
To make sense out of what? You are making very sweeping yet vague statements.
> I have my own wrapper to make it more user-friendly, but its functionality
> should arguably migrate into perf.
Uhm, no - your patch seems to reintroduce oprofile's horrible events files.
We really learned from that mistake and do not want to step back ...
Please see the detailed mails I wrote in this thread: what we want is to
extend and improve the existing generalizations of events. The useful bits
of the offcore PMU fit nicely into that scheme.
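To sketch what that direction means in practice - the event names below are
hypothetical placeholders, not existing perf events - users would type
symbolic, CPU-independent names and the kernel would map them to the right
offcore raw encodings internally:

  # hypothetical generalized names instead of raw config1 magic values:
  perf stat -e node-loads,node-load-misses ./myapp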
Thanks,
Ingo