Message-ID: <20111023155439.GA7286@localhost>
Date: Sun, 23 Oct 2011 23:54:39 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Trond Myklebust <Trond.Myklebust@...app.com>,
linux-nfs@...r.kernel.org
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>, Christoph Hellwig <hch@....de>,
Dave Chinner <david@...morbit.com>,
Greg Thelen <gthelen@...gle.com>,
Minchan Kim <minchan.kim@...il.com>,
Vivek Goyal <vgoyal@...hat.com>,
Andrea Righi <arighi@...eler.com>,
linux-mm <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 1/2] nfs: writeback pages wait queue
On Fri, Oct 21, 2011 at 12:05:30AM +0800, Wu Fengguang wrote:
> Trond,
>
> After applying these two patches, the IO-less patchset performs
> 45% better than the vanilla kernel, and the average commit size only
> decreases by 16% in the common NFS-thresh=1G/nfs-1dd case :)
To better understand how the NFS writeback wait queue helps, I
visualized the network traffic over time. Attached are the graphs for
the vanilla kernel and for the kernel with the IO-less + NFS wait
queue patches.
nfs-1dd-4k-32p-32016M-1024M:10-3.1.0-rc8-vanilla+/dstat-bw.png
nfs-1dd-4k-32p-31951M-1024M:10-3.1.0-rc8-nfs-wq4+/dstat-bw.png
The obvious difference is that the network traffic is now more evenly
distributed and the "zero traffic" periods are mostly gone.
The other 2dd and 10dd cases show similar results.
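For reference, here is a minimal sketch of how such a bandwidth-over-time
graph can be produced from a dstat CSV log. It assumes dstat was run with
something like "dstat -n --output dstat.csv 1" and that the log has
"recv"/"send" network columns; the column names and file name are
assumptions about the setup, not taken from the test scripts above.

#!/usr/bin/env python
# Sketch: plot network bandwidth over time from a dstat CSV log.
import csv
import matplotlib
matplotlib.use('Agg')               # render to a file, no display needed
import matplotlib.pyplot as plt

rows = list(csv.reader(open('dstat.csv')))

# dstat prepends a few metadata lines; find the row naming the fields.
hdr_idx = next(i for i, r in enumerate(rows)
               if 'recv' in r and 'send' in r)
header = rows[hdr_idx]
recv_col, send_col = header.index('recv'), header.index('send')

recv, send = [], []
for r in rows[hdr_idx + 1:]:
    if len(r) <= max(recv_col, send_col) or not r[recv_col]:
        continue                                # skip short/blank lines
    recv.append(float(r[recv_col]) / 2**20)     # bytes/s -> MiB/s
    send.append(float(r[send_col]) / 2**20)

t = range(len(recv))                            # one sample per second
plt.plot(t, recv, label='net recv (MiB/s)')
plt.plot(t, send, label='net send (MiB/s)')
plt.xlabel('time (s)')
plt.ylabel('bandwidth')
plt.legend()
plt.savefig('dstat-bw.png')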
Thanks,
Fengguang