Message-Id: <1193412540.27652.9.camel@twins>
Date: Fri, 26 Oct 2007 17:29:00 +0200
From: Peter Zijlstra <a.p.zijlstra@...llo.nl>
To: Ingo Molnar <mingo@...e.hu>
Cc: Martin Knoblauch <spamtrap@...bisoft.de>,
linux-kernel@...r.kernel.org, Fengguang Wu <wfg@...l.ustc.edu.cn>,
Andrew Morton <akpm@...ux-foundation.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Rik van Riel <riel@...hat.com>
Subject: Re: 2.6.24-rc1: First impressions
On Fri, 2007-10-26 at 17:22 +0200, Ingo Molnar wrote:
> * Martin Knoblauch <spamtrap@...bisoft.de> wrote:
>
> > Hi,
> >
> > just to give some feedback on 2.6.24-rc1. For some time I have been
> > tracking IO/writeback problems that hurt system responsiveness
> > big-time. I tested Peter's patches together with Fengguang's additions
> > and it looked promising. Therefore I was very happy to see Peter's work
> > go into 2.6.24 and waited eagerly for rc1. In short, I am impressed.
> > This really looks good. IO throughput is great, and so far I have not
> > been able to reproduce the responsiveness problems.
> >
> > Below are some numbers from my brute-force I/O tests that I can use
> > to bring responsiveness down. My platform is an HP DL380 G4, dual CPUs,
> > HT enabled, 8 GB memory, a SmartArray 6i controller with 4x72 GB SCSI
> > disks as RAID5 (battery-protected writeback cache enabled) and gigabit
> > networking (tg3). User space is 64-bit RHEL 4.3.
> >
> > I am basically doing copies using "dd" with a 1 MB blocksize. The local
> > filesystem is ext2 (noatime). The IO scheduler is deadline, as it tends
> > to give the best results. The NFS3 server is a Sun T2000 running
> > Solaris 10. The tests are (a sketch of the dd invocations follows the
> > results table):
> >
> > dd1 - copy 16 GB from /dev/zero to local FS
> > dd1-dir - same, but using O_DIRECT for output
> > dd2/dd2-dir - copy 2x7.6 GB in parallel from /dev/zero to local FS
> > dd3/dd3-dir - copy 3x5.2 GB in parallel from /dev/zero to local FS
> > net1 - copy 5.2 GB from NFS3 share to local FS
> > mix3 - copy 3x5.2 GB from /dev/zero to local disk and two NFS3 shares
> >
> > I did the numbers for 2.6.19.2, 2.6.22.6 and 2.6.24-rc1. All units
> > are MB/sec.
> >
> > test       2.6.19.2   2.6.22.6   2.6.24-rc1
> > ---------------------------------------------------------
> > dd1        28         50         96
> > dd1-dir    88         88         86
> > dd2        2x16.5     2x11       2x44.5
> > dd2-dir    2x44       2x44       2x43
> > dd3        3x9.8      3x8.7      3x30
> > dd3-dir    3x29.5     3x29.5     3x28.5
> > net1       30-33      50-55      37-52
> > mix3       17/32      25/50      96/35      (disk/combined-network)
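[ For reference, the tests described above boil down to plain dd
  invocations plus an elevator switch. The script below is a rough
  reconstruction, not Martin's actual test harness: the cciss device name,
  mount points and block counts are assumptions, and oflag=direct assumes
  GNU dd. ]

#!/bin/bash
# Sketch of the I/O tests described above (paths and device are assumed).

# Use the deadline elevator on the local RAID volume (device name assumed).
echo deadline > "/sys/block/cciss!c0d0/queue/scheduler"

# dd1: 16 GB sequential write to the local ext2 (noatime) filesystem.
dd if=/dev/zero of=/mnt/local/dd1.img bs=1M count=16384

# dd1-dir: same write, but bypassing the page cache with O_DIRECT.
dd if=/dev/zero of=/mnt/local/dd1-dir.img bs=1M count=16384 oflag=direct

# dd3: three ~5.2 GB writers in parallel (dd2 is analogous, 2x ~7.6 GB).
for i in 1 2 3; do
        dd if=/dev/zero of=/mnt/local/dd3.$i.img bs=1M count=5324 &
done
wait

# net1: ~5.2 GB read from an NFSv3 mount into the local filesystem.
dd if=/mnt/nfs1/src.img of=/mnt/local/net1.img bs=1M

# mix3: one local writer plus two writers onto NFSv3 mounts, in parallel.
dd if=/dev/zero of=/mnt/local/mix3.img bs=1M count=5324 &
dd if=/dev/zero of=/mnt/nfs1/mix3.img  bs=1M count=5324 &
dd if=/dev/zero of=/mnt/nfs2/mix3.img  bs=1M count=5324 &
wait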
>
> wow, really nice results! Peter does know how to make stuff fast :) Now
> let's pick up some of Peter's other, previously discarded patches as well
> :-)
>
> Such as the rewritten reclaim (clockpro) patches:
>
> http://programming.kicks-ass.net/kernel-patches/page-replace/
I think riel is taking over that stuff with his split-VM work and per-type
policies.
> The improve-swap-performance (swap-token) patches:
>
> http://programming.kicks-ass.net/kernel-patches/swap_token/
Ashwin's version did get upstreamed.
> His enable-swap-over-NFS [and other complex IO transports] patches:
>
> http://programming.kicks-ass.net/kernel-patches/vm_deadlock/
Will post that one again soonish, especially now that Linus has professed a
liking for swap over NFS.
I've been working on improving the changelogs and comments in that code.
The latest code (somewhat raw, as rushed out by Ingo posting this) is at:
http://programming.kicks-ass.net/kernel-patches/vm_deadlock/v2.6.23-mm1/
> And the concurrent pagecache patches:
>
> http://programming.kicks-ass.net/kernel-patches/concurrent-pagecache/
>
> as a starter :-) I think the MM should get out of deep-feature-freeze
> mode - there's tons of room to improve :-/
Yeah, that one would be cool, but it depends on Nick getting his
lockless pagecache upstream. For those who don't know, both are in -rt
(and have been for some time), so it's not unproven code.