Message-ID: <20090907221812.GA1700@elte.hu>
Date: Tue, 8 Sep 2009 00:18:12 +0200
From: Ingo Molnar <mingo@...e.hu>
To: Jens Axboe <jens.axboe@...cle.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Avi Kivity <avi@...hat.com>, Con Kolivas <kernel@...ivas.org>,
linux-kernel@...r.kernel.org, Mike Galbraith <efault@....de>
Subject: Re: BFS vs. mainline scheduler benchmarks and measurements

* Jens Axboe <jens.axboe@...cle.com> wrote:
> On Mon, Sep 07 2009, Peter Zijlstra wrote:
> > On Mon, 2009-09-07 at 22:46 +0200, Jens Axboe wrote:
> > > > a bug in the SMP load-balancer that can cause interactivity problems
> > > > on large CPU count systems.
> > >
> > > Worth trying on the dual core box?
> >
> > I debugged the issue on a dual core :-)
> >
> > It should be more pronounced on larger machines, but it's present on
> > dual-core too.
>
> Alright, I'll upgrade that box to -tip tomorrow and see if it
> makes a noticeable difference. At -j4 or higher, I can literally
> see windows slowly popping up when switching to a different
> virtual desktop.
btw., if you run -tip and have these enabled:
CONFIG_PERF_COUNTER=y
CONFIG_EVENT_TRACING=y
cd tools/perf/
make -j install
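
(If you are not sure those prerequisites are in place, something like
this should confirm them - note that the config file path below is
distro-dependent, and debugfs may need to be mounted first:)

  grep -E 'CONFIG_PERF_COUNTER|CONFIG_EVENT_TRACING' /boot/config-$(uname -r)

  mount -t debugfs nodev /sys/kernel/debug 2>/dev/null
  grep sched_stat /sys/kernel/debug/tracing/available_events
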
Then you can use a couple of new perfcounters features to
measure scheduler latencies. For example:
perf stat -e sched:sched_stat_wait -e task-clock ./hackbench 20
This will tell you how many times the workload got delayed waiting
for CPU time.
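
You can also mix other scheduler events into the same run - for
example context switches and migrations:

  perf stat -e sched:sched_stat_wait -e sched:sched_switch \
          -e sched:sched_migrate_task -e task-clock ./hackbench 20
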
You can repeat the workload as well and see the statistical
properties of those metrics:
aldebaran:/home/mingo> perf stat --repeat 10 -e \
sched:sched_stat_wait:r -e task-clock ./hackbench 20
Time: 0.251
Time: 0.214
Time: 0.254
Time: 0.278
Time: 0.245
Time: 0.308
Time: 0.242
Time: 0.222
Time: 0.268
Time: 0.244
 Performance counter stats for './hackbench 20' (10 runs):

          59826  sched:sched_stat_wait    #      0.026 M/sec   ( +-   5.540% )
    2280.099643  task-clock-msecs         #      7.525 CPUs    ( +-   1.620% )

    0.303013390  seconds time elapsed   ( +-   3.189% )
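
Since the whole point is to compare schedulers, you can also log the
same run on each kernel you boot and diff the logs later - perf stat
prints its summary to stderr, so something like this should work:

  perf stat --repeat 10 -e sched:sched_stat_wait:r -e task-clock \
          ./hackbench 20 2> hackbench-$(uname -r).log
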
To get scheduling events, do:
# perf list 2>&1 | grep sched:
  sched:sched_kthread_stop                 [Tracepoint event]
  sched:sched_kthread_stop_ret             [Tracepoint event]
  sched:sched_wait_task                    [Tracepoint event]
  sched:sched_wakeup                       [Tracepoint event]
  sched:sched_wakeup_new                   [Tracepoint event]
  sched:sched_switch                       [Tracepoint event]
  sched:sched_migrate_task                 [Tracepoint event]
  sched:sched_process_free                 [Tracepoint event]
  sched:sched_process_exit                 [Tracepoint event]
  sched:sched_process_wait                 [Tracepoint event]
  sched:sched_process_fork                 [Tracepoint event]
  sched:sched_signal_send                  [Tracepoint event]
  sched:sched_stat_wait                    [Tracepoint event]
  sched:sched_stat_sleep                   [Tracepoint event]
  sched:sched_stat_iowait                  [Tracepoint event]
stat_wait/sleep/iowait would be the interesting ones for latency
analysis.
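
E.g. all three can be counted in a single run:

  perf stat -e sched:sched_stat_wait:r -e sched:sched_stat_sleep:r \
          -e sched:sched_stat_iowait:r -e task-clock ./hackbench 20
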
Or, if you want to see all the individual delays and their
min/max/avg, you can do:
perf record -e sched:sched_stat_wait:r -f -R -c 1 ./hackbench 20
perf trace
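
If you'd rather not eyeball the trace, the delays can be summarized
with a bit of awk - a rough sketch, assuming the per-event delay shows
up as a delay=<nsecs> field in the perf trace output (check the exact
format on your box first):

  perf trace | awk '
        # pick out the delay=NNN field from each trace line
        match($0, /delay=[0-9]+/) {
                d = substr($0, RSTART + 6, RLENGTH - 6) + 0
                sum += d; n++
                if (n == 1 || d < min) min = d
                if (d > max) max = d
        }
        END { if (n) printf "min %d max %d avg %.1f nsecs (%d samples)\n",
                            min, max, sum / n, n }'
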
Ingo