Message-Id: <1225109171.4238.23.camel@marge.simson.net>
Date: Mon, 27 Oct 2008 13:06:11 +0100
From: Mike Galbraith <efault@....de>
To: Alan Cox <alan@...rguk.ukuu.org.uk>
Cc: Ingo Molnar <mingo@...e.hu>, Jiri Kosina <jkosina@...e.cz>,
Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
David Miller <davem@...emloft.net>, rjw@...k.pl,
s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
On Mon, 2008-10-27 at 11:33 +0000, Alan Cox wrote:
> > To get the best possible dbench numbers in CPU-bound dbench
> > runs, you have to throw away the scheduler completely and do this
> > instead:
> >
> > - first execute all requests of client 1
> > - then execute all requests of client 2
> > ....
> > - execute all requests of client N
>
> Rubbish. If you do that you'll not get enough I/O in parallel to schedule
> the disk well (not that most of our I/O schedulers do the job
> well, and the vm writeback threads then mess it up, and the lack of Arjan's
> ioprio fixes then totally screws you) </rant>
>
> > The moment the clients are allowed to overlap and their requests
> > are executed more fairly, the dbench numbers drop.
>
> Fairness isn't everything. Dbench is a fairly good tool for studying some
> real-world workloads. If your fairness hurts throughput that much, maybe
> your scheduler algorithm is just plain *wrong*: it isn't adapting to the
> workload at all well.
It doesn't seem to be a scheduler/fairness issue: 2.6.22.19 is O(1), and it
falls apart too. I posted the numbers and full dbench output yesterday.
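
(For illustration only, nothing from the thread: here's a toy user-space
model of Ingo's point above. Every cost below is an invented number; it
only shows the shape of the argument, i.e. why executing each client's
requests back to back beats fair interleaving once switching between
clients carries a cost.)

/* Toy model: N clients each issue R CPU-bound requests.  Changing
 * clients costs extra (think cold caches, context switch).  Batching
 * all of one client's requests before the next pays that cost N times;
 * fair round-robin pays it on every single request. */
#include <stdio.h>

#define NCLIENTS    4
#define NREQUESTS   100
#define WORK_COST   10  /* invented: cost of one request, warm caches */
#define SWITCH_COST 7   /* invented: extra cost when the client changes */

static long run(const int *order, int total)
{
	long cost = 0;
	int prev = -1;

	for (int i = 0; i < total; i++) {
		cost += WORK_COST;
		if (order[i] != prev)
			cost += SWITCH_COST;
		prev = order[i];
	}
	return cost;
}

int main(void)
{
	int batched[NCLIENTS * NREQUESTS];
	int fair[NCLIENTS * NREQUESTS];
	int i = 0;

	/* batched: all of client 0, then all of client 1, ... */
	for (int c = 0; c < NCLIENTS; c++)
		for (int r = 0; r < NREQUESTS; r++)
			batched[i++] = c;

	/* fair: round-robin, one request per client per pass */
	i = 0;
	for (int r = 0; r < NREQUESTS; r++)
		for (int c = 0; c < NCLIENTS; c++)
			fair[i++] = c;

	printf("batched cost: %ld\n", run(batched, NCLIENTS * NREQUESTS));
	printf("fair    cost: %ld\n", run(fair, NCLIENTS * NREQUESTS));
	return 0;
}

With these made-up costs the batched order takes 4 switch hits total and
the fair order takes one per request, which is the whole dbench-vs-fairness
tension in miniature.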
-Mike