[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20081027183312.GD11494@elte.hu>
Date: Mon, 27 Oct 2008 19:33:12 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Jiri Kosina <jkosina@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>,
David Miller <davem@...emloft.net>, rjw@...k.pl,
s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
* Alan Cox <alan@...rguk.ukuu.org.uk> wrote:
> > The way to get the best possible dbench numbers in CPU-bound dbench
> > runs, you have to throw away the scheduler completely, and do this
> > instead:
> >
> > - first execute all requests of client 1
> > - then execute all requests of client 2
> > ....
> > - execute all requests of client N
>
> Rubbish. [...]
i've actually implemented that about a decade ago: i've tracked down
what makes dbench tick, i've implemented the kernel heuristics for it
to make dbench scale linearly with the number of clients - just to be
shot down by Linus about my utter rubbish approach ;-)
> [...] If you do that you'll not get enough I/O in parallel to
> schedule the disk well (not that most of our I/O schedulers are
> doing the job well, and the vm writeback threads then mess it up and
> the lack of Arjans ioprio fixes then totally screw you) </rant>
the best dbench results come from systems that have enough RAM to
cache the full working set, and a filesystem intelligent enough to not
insert bogus IO serialization cycles (ext3 is not such a filesystem).
The moment there's real IO it becomes harder to analyze but the same
basic behavior remains: the more unfair the IO scheduler, the "better"
dbench results we get.
Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists