Message-ID: <20160203102247.GB5746@gmail.com>
Date: Wed, 3 Feb 2016 11:22:47 +0100
From: Ingo Molnar <mingo@...nel.org>
To: Andy Lutomirski <luto@...capital.net>
Cc: Arnaldo Carvalho de Melo <acme@...radead.org>,
Frederic Weisbecker <fweisbec@...il.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Rik van Riel <riel@...hat.com>,
Peter Zijlstra <peterz@...radead.org>, clark@...hat.com,
Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH] perf tooling: Add 'perf bench syscall' benchmark

* Andy Lutomirski <luto@...capital.net> wrote:
> On Jan 31, 2016 11:42 PM, "Ingo Molnar" <mingo@...nel.org> wrote:
> >
> >
> > * riel@...hat.com <riel@...hat.com> wrote:
> >
> > > (v3: address comments raised by Frederic)
> > >
> > > Running with nohz_full introduces a fair amount of overhead.
> > > Specifically, various things that are usually done from the
> > > timer interrupt are now done at syscall, irq, and guest
> > > entry and exit times.
> > >
> > > However, some of the code that is called every single time
> > > has only ever worked at jiffy resolution. The code in
> > > __acct_update_integrals was also doing some unnecessary
> > > calculations.
> > >
> > > Getting rid of the unnecessary calculations, without
> > > changing any of the functionality in __acct_update_integrals,
> > > gets us about an 11% win.
> > >
> > > Not calling the time statistics updating code more than
> > > once per jiffy, as is done on housekeeping CPUs and on
> > > all the CPUs of a non-nohz_full system, shaves off a
> > > further 30%.
> > >
> > > I tested this series with a microbenchmark calling
> > > an invalid syscall number ten million times in a row,
> > > on a nohz_full cpu.
> > >
> > > Run times for the microbenchmark:
> > >
> > > 4.4                          3.8 seconds
> > > 4.5-rc1                      3.7 seconds
> > > 4.5-rc1 + first patch        3.3 seconds
> > > 4.5-rc1 + first 3 patches    3.1 seconds
> > > 4.5-rc1 + all patches        2.3 seconds
> >
> > Another suggestion (beyond fixing the 32-bit build ;-): could you please stick
> > your syscall microbenchmark into 'perf bench', so that we have a standardized way
> > of checking such numbers?
> >
> > In fact I'd suggest we introduce an entirely new sub-tool for system call
> > performance measurement - and this might be the first functionality of it.
> >
> > I've attached a quick patch that is basically a copy of 'perf bench numa' and
> > which measures getppid() performance (a simple syscall whose result is not
> > cached by glibc).
> >
> > I kept the process, threading and memory allocation bits of numa.c, just in case
> > we need them to measure more complex syscalls. Maybe we could keep the threading
> > bits and remove the memory allocation parameters, to simplify the benchmark?
> >
> > Anyway, this could be a good base to start off on.
>
> So much code...

Arguably 90% of that should be factored out, as it's now duplicated between
bench/numa.c and bench/syscall.c.

Technically, for a minimal benchmark, something like this would already be
functional for tools/perf/bench/syscall.c:

#include "../perf.h"
#include "../util/util.h"
#include "../builtin.h"

#include "bench.h"

static void run_syscall_benchmark(void)
{
        /* .... your benchmark loop as-is .... */
}

int bench_syscall(int argc __maybe_unused, const char **argv __maybe_unused,
                  const char *prefix __maybe_unused)
{
        run_syscall_benchmark();

        switch (bench_format) {
        case BENCH_FORMAT_DEFAULT:
                printf("print results in human-readable format\n");
                break;
        case BENCH_FORMAT_SIMPLE:
                printf("print results in machine-parseable format\n");
                break;
        default:
                BUG_ON(1);
        }

        return 0;
}

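The run_syscall_benchmark() body is the only piece left open above. Purely as a
sketch (not Rik's actual loop - the syscall(-1) call and the iteration count below
are just assumptions based on his description of calling an invalid syscall number
ten million times in a row), it could be as simple as:

#include <unistd.h>

/*
 * Sketch only: hammer an invalid syscall number so that we measure
 * nothing but the kernel entry/exit path, never any real syscall work.
 */
#define NR_SYSCALL_LOOPS 10000000

static void run_syscall_benchmark(void)
{
        int i;

        for (i = 0; i < NR_SYSCALL_LOOPS; i++)
                syscall(-1);    /* fails with ENOSYS and returns immediately */
}
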
Plus the small amount of glue for bench_syscall() I sent in the first patch.
Completely untested.
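
(The glue itself is nothing special - from memory it amounts to a prototype in
tools/perf/bench/bench.h plus a new benchmark table wired into
tools/perf/builtin-bench.c, roughly like the sketch below. This is not the actual
patch, and the table entries and field layout are assumptions that may differ from
the real code:)

/* tools/perf/bench/bench.h - sketch, not the actual patch: */
extern int bench_syscall(int argc, const char **argv, const char *prefix);

/* tools/perf/builtin-bench.c - assuming the usual name/summary/fn table: */
static struct bench syscall_benchmarks[] = {
        { "basic",      "Benchmark for basic getppid(2) calls", bench_syscall   },
        { "all",        "Run all syscall benchmarks",           NULL            },
        { NULL,         NULL,                                   NULL            }
};

plus, presumably, a matching "syscall" entry in the collections[] array so that
'perf bench syscall basic' resolves to it.
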
If the loop is long enough then even without any timing measurement this would be
usable via:

  perf stat --null --repeat 10 perf bench syscall

as 'perf stat' will do the timing and statistics.
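
(Note that to actually exercise the nohz_full path the benchmark has to run on a
nohz_full CPU as well - e.g., assuming CPU 3 is in the nohz_full set, something
like:

  perf stat --null --repeat 10 taskset -c 3 perf bench syscall

would pin it there while 'perf stat' still does the timing.)
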
> I'll try to take a look this week. It shouldn't be so hard to port my
> rdpmc-based widget over to this.

Sounds great to me!

Thanks,

	Ingo