Message-ID: <20100118145346.GF10364@nowhere>
Date: Mon, 18 Jan 2010 15:53:47 +0100
From: Frederic Weisbecker <fweisbec@...il.com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Stephane Eranian <eranian@...gle.com>,
linux-kernel@...r.kernel.org, mingo@...e.hu, paulus@...ba.org,
davem@...emloft.net, perfmon2-devel@...ts.sf.net, eranian@...il.com
Subject: Re: [PATCH] perf_events: improve x86 event scheduling (v5)
On Mon, Jan 18, 2010 at 03:37:01PM +0100, Peter Zijlstra wrote:
> On Mon, 2010-01-18 at 15:20 +0100, Frederic Weisbecker wrote:
> >
> > > Then there's still the question of having events of multiple hw pmus in
> > > a single group, I'd be perfectly fine with saying that's not allowed,
> > > what to others think?
> >
> >
> > I guess we need that. It can be interesting to couple
> > hardware counters with memory accesses... or whatever.
>
> That really depends on how easy it is to correlate events from the
> various pmus. This case could indeed do that, but the core vs uncore
> thing is a lot less clear.
Not sure what you both mean by this core vs uncore thing :)
Is it about hardware counters that apply to a single hardware thread
versus ones shared among the threads within the same core?
> > Perf stat combines cache miss counting with page faults,
> > cpu clock counters.
>
> perf stat also doesn't use groups and it still works quite nicely.
Ah? I thought it did.
> > We shouldn't limit such possibilities for technical/cleanliness
> > reasons. We should rather adapt.
>
> Maybe, I'm not a very big fan of groups myself, but they are clearly
> useful within a pmu (measuring cache misses against total accesses,
> for example), but their use between pmus is questionable.
Cross-pmu, these seem to make sense only for non-pinned groups,
where you want two non-pinned counters to be paired rather than
scheduled randomly and separately.
For the other cases, indeed I'm not sure it is useful :)