Message-ID: <20081027112750.GA2771@elte.hu>
Date: Mon, 27 Oct 2008 12:27:50 +0100
From: Ingo Molnar <mingo@...e.hu>
To: Jiri Kosina <jkosina@...e.cz>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Mike Galbraith <efault@....de>,
David Miller <davem@...emloft.net>, rjw@...k.pl,
s0mbre@...rvice.net.ru, linux-kernel@...r.kernel.org,
netdev@...r.kernel.org
Subject: Re: [tbench regression fixes]: digging out smelly deadmen.
* Jiri Kosina <jkosina@...e.cz> wrote:
> Ok, so another important datapoint:
>
> with c1e4fe711a4 (just before CFS was merged for 2.6.23), dbench
> measures a throughput of
>
> 187.7 MB/s
>
> in our testing conditions (default config).
>
> With c31f2e8a42c4 (just after CFS was merged for 2.6.23), the
> throughput measured by dbench is
>
> 82.3 MB/s
>
> This is the huge drop we have been looking for. After this, the
> performance still degraded gradually, down to the ~45 MB/s we are
> measuring for 2.6.27. But the biggest drop (more than 50%) points
> directly to the CFS merge.
that is a well-known property of dbench: it rewards unfairness in IO,
memory management and scheduling.
to get the best possible dbench numbers in CPU-bound dbench runs, you
have to throw away the scheduler completely and do this instead:
- first execute all requests of client 1
- then execute all requests of client 2
- ...
- execute all requests of client N
the moment the clients are allowed to overlap, i.e. the moment their
requests are executed more fairly, the dbench numbers drop.
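
as a toy illustration of that effect, here is a minimal C sketch,
assuming every switch between clients carries some fixed cost (cache
eviction, scheduler work). All names and constants (run(), REQ_COST,
SWITCH_COST, the client/request counts) are made up for illustration,
not measured:

/*
 * Toy simulation: strict per-client batching vs. fair round-robin,
 * with a fixed cost charged whenever the served client changes.
 * All constants are illustrative assumptions, not measurements.
 */
#include <stdio.h>

#define NCLIENTS	4
#define NREQUESTS	1000	/* requests per client */
#define REQ_COST	1.0	/* cost of executing one request */
#define SWITCH_COST	5.0	/* extra cost when the served client changes */

/*
 * Execute requests in the given order; order[i] names the client
 * served at step i. Returns the total simulated time.
 */
static double run(const int *order, int steps)
{
	double t = 0.0;
	int prev = -1, i;

	for (i = 0; i < steps; i++) {
		if (order[i] != prev)
			t += SWITCH_COST;	/* pay for the client switch */
		t += REQ_COST;
		prev = order[i];
	}
	return t;
}

int main(void)
{
	static int batched[NCLIENTS * NREQUESTS];
	static int fair[NCLIENTS * NREQUESTS];
	int i;

	/* unfair: all requests of client 0, then all of client 1, ... */
	for (i = 0; i < NCLIENTS * NREQUESTS; i++)
		batched[i] = i / NREQUESTS;

	/* fair: round-robin, one request per client at a time */
	for (i = 0; i < NCLIENTS * NREQUESTS; i++)
		fair[i] = i % NCLIENTS;

	printf("batched (unfair):   %.0f time units\n",
	       run(batched, NCLIENTS * NREQUESTS));
	printf("round-robin (fair): %.0f time units\n",
	       run(fair, NCLIENTS * NREQUESTS));
	return 0;
}

with these made-up numbers the batched order pays the switch cost 4
times while the fair order pays it 4000 times, which is the same shape
of effect dbench rewards.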
Ingo