Message-Id: <1249989717.17467.159.camel@twins>
Date: Tue, 11 Aug 2009 13:21:57 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Ingo Molnar <mingo@...e.hu>
Cc: Johannes Stezenbach <js@...21.net>, linux-kernel@...r.kernel.org,
Steven Rostedt <rostedt@...dmis.org>,
Frédéric Weisbecker <fweisbec@...il.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch] cache-miss and cache-refs events on P6-mobile CPUs
On Tue, 2009-08-11 at 13:06 +0200, Ingo Molnar wrote:
> * Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
>
> > On Tue, 2009-08-11 at 11:34 +0200, Ingo Molnar wrote:
> >
> > > @@ -116,8 +116,8 @@ static const u64 p6_perfmon_event_map[]
> > > {
> > > [PERF_COUNT_HW_CPU_CYCLES] = 0x0079,
> > > [PERF_COUNT_HW_INSTRUCTIONS] = 0x00c0,
> > > - [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0000,
> > > - [PERF_COUNT_HW_CACHE_MISSES] = 0x0000,
> > > + [PERF_COUNT_HW_CACHE_REFERENCES] = 0x0f2e,
> > > + [PERF_COUNT_HW_CACHE_MISSES] = 0x012e,
> >
> > 2e is the total number of L2 events,
> >
> > 0f is all MESI states
> > 01 is the I (invalid) state
>
> here's Intel's own description:
>
> I_STATE 0x01 Counts how many times requests miss the cache.
> MESI 0x0F Counts how many times cache lines in any state are accessed.
>
> so it's pretty close in practice. The only counts that are a bit
> inapplicable are fetches/prefetches it initiates on its own (they
> are included here) - but those too are related to the workload in
> general, so it's good as an approximation.
>
> It's definitely better than 0x00 IMO. What do you think?
Well, if they say so. I was thinking that counting I states would count
invalidates due to remote S->{E,M} transitions, invlpg instructions and
such. And hitting an invalidated line is a whole different thing from
plainly missing it because the line isn't present.

Anyway, if this is the Intel-recommended way to count cache misses, who
am I to argue.
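
As a rough illustration, assuming the usual P6 EVNTSEL layout (event
select in bits 0-7, unit mask in bits 8-15) and with made-up macro
names rather than the in-tree ones, the two new map entries decode
like this:

	/*
	 * Minimal sketch, not the actual arch/x86 code: split the packed
	 * p6_perfmon_event_map[] values into event select and unit mask.
	 *
	 *   0x0f2e -> event 0x2E (L2 requests), umask 0x0F (all MESI states)
	 *   0x012e -> event 0x2E (L2 requests), umask 0x01 (I state only)
	 */
	#include <stdio.h>
	#include <stdint.h>

	#define P6_EVNTSEL_EVENT_MASK	0x000000FFULL	/* bits 0-7  */
	#define P6_EVNTSEL_UMASK_MASK	0x0000FF00ULL	/* bits 8-15 */

	int main(void)
	{
		uint64_t codes[] = { 0x0f2e, 0x012e };
		int i;

		for (i = 0; i < 2; i++) {
			uint64_t config = codes[i];

			printf("config 0x%04llx: event 0x%02llx, umask 0x%02llx\n",
			       (unsigned long long)config,
			       (unsigned long long)(config & P6_EVNTSEL_EVENT_MASK),
			       (unsigned long long)((config & P6_EVNTSEL_UMASK_MASK) >> 8));
		}
		return 0;
	}

So both entries program the same L2 request event; only the unit mask
differs, selecting either all MESI states (references) or only lines
found in the I state (misses).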