Message-Id: <200810292059.37995.nickpiggin@yahoo.com.au>
Date: Wed, 29 Oct 2008 20:59:37 +1100
From: Nick Piggin <nickpiggin@...oo.com.au>
To: Ingo Molnar <mingo@...e.hu>
Cc: Alan Cox <alan@...rguk.ukuu.org.uk>, Jiri Kosina <jkosina@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>,
David Miller <davem@...emloft.net>, rjw@...k.pl,
s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
On Tuesday 28 October 2008 05:33, Ingo Molnar wrote:
> * Alan Cox <alan@...rguk.ukuu.org.uk> wrote:
> > > To get the best possible dbench numbers in CPU-bound dbench
> > > runs, you have to throw away the scheduler completely and do this
> > > instead:
> > >
> > > - first execute all requests of client 1
> > > - then execute all requests of client 2
> > > ....
> > > - execute all requests of client N
> >
> > Rubbish. [...]
>
> i've actually implemented that about a decade ago: i've tracked down
> what makes dbench tick, i've implemented the kernel heuristics for it
> to make dbench scale linearly with the number of clients - just to be
> shot down by Linus about my utter rubbish approach ;-)
>
> > [...] If you do that you'll not get enough I/O in parallel to
> > schedule the disk well (not that most of our I/O schedulers are
> > doing the job well, and the vm writeback threads then mess it up and
> > the lack of Arjan's ioprio fixes then totally screws you) </rant>
>
> the best dbench results come from systems that have enough RAM to
> cache the full working set, and a filesystem intelligent enough to not
> insert bogus IO serialization cycles (ext3 is not such a filesystem).
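
To make that heuristic concrete: below is a minimal userspace sketch
of the per-client batching described above. It is illustrative only;
the request/client structures are invented for the example, this is
not code from any kernel or from dbench.

    #include <stdio.h>
    #include <stddef.h>

    struct request {
            struct request *next;           /* FIFO link */
            int client_id;
            int seq;
    };

    struct client {
            struct request *queue;          /* pending requests, in order */
    };

    static void run_one(struct request *req)
    {
            printf("client %d request %d\n", req->client_id, req->seq);
    }

    /*
     * Drain each client completely before touching the next one:
     * no interleaving, no fairness. This is exactly the
     * anti-scheduler approach being described.
     */
    static void run_batched(struct client *clients, size_t n)
    {
            for (size_t i = 0; i < n; i++) {
                    for (struct request *r = clients[i].queue; r; r = r->next)
                            run_one(r);
                    clients[i].queue = NULL;
            }
    }

    int main(void)
    {
            struct request r2 = { NULL, 1, 0 };
            struct request r1 = { NULL, 0, 1 };
            struct request r0 = { &r1, 0, 0 };
            struct client clients[2] = { { &r0 }, { &r2 } };

            /* prints all of client 0's requests, then client 1's */
            run_batched(clients, 2);
            return 0;
    }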
You can get good dbench results from dbench on tmpfs, which exercises
the vm, vfs, scheduler, etc. without IO or filesystems.
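
For example (assuming a dbench version that takes the -D/--directory
option; the mount point here is made up):

    # mount -t tmpfs tmpfs /mnt/scratch
    # dbench -D /mnt/scratch 8

With the whole working set in RAM, the numbers reflect scheduler and
vfs behaviour rather than the IO stack.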