Message-ID: <20100305164615.GY13205@erda.amd.com>
Date: Fri, 5 Mar 2010 17:46:15 +0100
From: Robert Richter <robert.richter@....com>
To: Andi Kleen <andi@...stfloor.org>
CC: Ingo Molnar <mingo@...e.hu>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Frédéric Weisbecker <fweisbec@...il.com>,
Mike Galbraith <efault@....de>,
LKML <linux-kernel@...r.kernel.org>,
Arnaldo Carvalho de Melo <acme@...hat.com>,
Pekka Enberg <penberg@...helsinki.fi>,
Paul Mackerras <paulus@...ba.org>,
oprofile-list <oprofile-list@...ts.sourceforge.net>,
"H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Arjan van de Ven <arjan@...radead.org>,
Steven Rostedt <rostedt@...dmis.org>
Subject: Re: [PATCH 0/26] oprofile: Performance counter multiplexing
On 26.02.10 15:51:04, Andi Kleen wrote:
> That said, the biggest problem with oprofile right now is that the
> new buffer it is using is quite a lot less reliable and drops
> events left and right under any non-trivial load. That makes
> oprofile very unreliable, especially in call graph mode.
(cc'ing Steve)
Andi,
the tests I run with oprofile do not indicate unreliable ring_buffer
behavior. Maybe my use cases and loads are different. Can you describe
a setup where this reliably happens? What is the impact: do you see
lost samples or inconsistent buffer data? Does the loss occur in
kernel or user space? Also, I am not aware of the ring_buffer being
unreliable for ftrace or tracepoints, where it is heavily used. I
really want to find the root cause here.
Thanks,
-Robert
--
Advanced Micro Devices, Inc.
Operating System Research Center
email: robert.richter@....com
--