Message-ID: <alpine.DEB.2.00.1112081324311.25737@cl320.eecs.utk.edu>
Date: Thu, 8 Dec 2011 13:34:04 -0500
From: Vince Weaver <vweaver1@...s.utk.edu>
To: <eranian@...il.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Ingo Molnar <mingo@...e.hu>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Paul Mackerras <paulus@...ba.org>,
Arnaldo Carvalho de Melo <acme@...stprotocols.net>
Subject: Re: perf_event self-monitoring overhead regression
On Wed, 7 Dec 2011, stephane eranian wrote:
> Are those the results for surrounding the ioctl() with rdtsc()?
>
> What do the axes actually represent?
>
The code looks like this:

start_before = rdtsc();
ret1 = ioctl(fd[0], PERF_EVENT_IOC_ENABLE, 0);
start_after = rdtsc();

The stop and read calls are wrapped the same way, and the events are set
up in advance.
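
For reference, here is a rough, self-contained sketch of the whole
measurement pattern.  The event choice (cpu cycles on the current
process) and the helper names (rdtsc(), perf_event_open()) are just
illustrative here, not the exact test code:

/* Rough sketch: time perf_event start/stop/read calls with rdtsc().
 * Assumes x86_64 Linux; event choice and helpers are illustrative. */
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/syscall.h>
#include <linux/perf_event.h>

static inline uint64_t rdtsc(void)
{
        unsigned int lo, hi;

        __asm__ volatile("rdtsc" : "=a" (lo), "=d" (hi));
        return ((uint64_t)hi << 32) | lo;
}

static int perf_event_open(struct perf_event_attr *attr, pid_t pid,
                           int cpu, int group_fd, unsigned long flags)
{
        return syscall(__NR_perf_event_open, attr, pid, cpu,
                       group_fd, flags);
}

int main(void)
{
        uint64_t start_before, start_after, stop_before, stop_after;
        uint64_t read_before, read_after;
        struct perf_event_attr attr;
        long long count;
        int fd[1];

        /* Set the event up in advance, disabled, so only the
         * start/stop/read calls land between the rdtsc() pairs. */
        memset(&attr, 0, sizeof(attr));
        attr.type = PERF_TYPE_HARDWARE;
        attr.size = sizeof(attr);
        attr.config = PERF_COUNT_HW_CPU_CYCLES;
        attr.disabled = 1;

        fd[0] = perf_event_open(&attr, 0, -1, -1, 0);
        if (fd[0] < 0) {
                perror("perf_event_open");
                return 1;
        }

        /* start */
        start_before = rdtsc();
        ioctl(fd[0], PERF_EVENT_IOC_ENABLE, 0);
        start_after = rdtsc();

        /* stop */
        stop_before = rdtsc();
        ioctl(fd[0], PERF_EVENT_IOC_DISABLE, 0);
        stop_after = rdtsc();

        /* read */
        read_before = rdtsc();
        read(fd[0], &count, sizeof(count));
        read_after = rdtsc();

        printf("start: %llu cycles\n",
               (unsigned long long)(start_after - start_before));
        printf("stop:  %llu cycles\n",
               (unsigned long long)(stop_after - stop_before));
        printf("read:  %llu cycles\n",
               (unsigned long long)(read_after - read_before));

        close(fd[0]);
        return 0;
}

Keeping the event disabled at setup time means only the ioctl()/read()
path itself falls between the timestamp pairs.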
I've put up a page with some more graphs (they're still a bit rough) and
some more background on what the tests are doing here:
http://web.eecs.utk.edu/~vweaver1/projects/perf-events/benchmarks/rdtsc_overhead/
The overall overhead results I'm seeing with rdtsc() are the same as
those I saw using the clock_gettime() calls.
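
The clock_gettime() variant wraps the same calls, roughly like this (the
clock id and the helper name below are just illustrative, not the exact
test code):

#include <time.h>
#include <sys/ioctl.h>
#include <linux/perf_event.h>

/* Illustrative only: time one PERF_EVENT_IOC_ENABLE ioctl() with
 * clock_gettime(); CLOCK_MONOTONIC is an assumption. */
static long long time_enable_ns(int fd)
{
        struct timespec before, after;

        clock_gettime(CLOCK_MONOTONIC, &before);
        ioctl(fd, PERF_EVENT_IOC_ENABLE, 0);
        clock_gettime(CLOCK_MONOTONIC, &after);

        return (after.tv_sec - before.tv_sec) * 1000000000LL +
               (after.tv_nsec - before.tv_nsec);
}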
The total overhead of start/stop/read seems to be getting worse with
each kernel release, but the individual start, stop and read calls don't
follow an obvious pattern.
I'm rebuilding stock kernels with the same compiler/config and will
re-run the tests, but it may be a few days before I have those results.
Thanks,
Vince
vweaver1@...s.utk.edu