Message-ID: <20090907204458.GJ18599@kernel.dk>
Date: Mon, 7 Sep 2009 22:44:58 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Ingo Molnar <mingo@...e.hu>
Cc: Con Kolivas <kernel@...ivas.org>, linux-kernel@...r.kernel.org,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>
Subject: Re: BFS vs. mainline scheduler benchmarks and measurements
On Mon, Sep 07 2009, Jens Axboe wrote:
> > And yes, it would be wonderful to get a test-app from you that would
> > express the kind of pain you are seeing during compile jobs.
>
> I was hoping this one would, but it's not showing anything. I even added
> support for doing the ping and wakeup over a socket, to see if the pipe
> test was doing well because of the sync wakeup we do there. The net
> latency is a little worse, but still good. So no luck in making that app
> so far.
Here's a version that bounces timestamps between a producer and a number
of consumers (clients). Not really tested much, but perhaps someone can
compare this on a box that boots BFS and see what happens.
To run it, use -cX where X is the number of children that you wait for a
response from. The max delay across these children is logged for each
wakeup. You can invoke it a la:
$ ./latt -c4 'make -j4'
and it'll dump the max/avg/stddev bounce time after make has completed,
or if you just want to play around, start the compile in one xterm and
do:
$ ./latt -c4 'sleep 5'
to just log for a small period of time. Vary the number of clients to
see how that changes the aggregated latency. One client should be fast;
adding more clients quickly drives it up.
Additionally, it has -f and -t options that control the window of
sleep time for the parent between each message. The numbers are in
msecs, and it defaults to a minimum of 100 msecs and a maximum of 500 msecs.
--
Jens Axboe
Attachment: latt.c (text/x-csrc, 5562 bytes)