Message-ID: <20100224044356.GA2007@localhost>
Date: Wed, 24 Feb 2010 12:43:56 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Dave Chinner <david@...morbit.com>
Cc: Trond Myklebust <Trond.Myklebust@...app.com>,
"linux-nfs@...r.kernel.org" <linux-nfs@...r.kernel.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [RFC] nfs: use 2*rsize readahead size
On Wed, Feb 24, 2010 at 12:24:14PM +0800, Dave Chinner wrote:
> On Wed, Feb 24, 2010 at 02:29:34PM +1100, Dave Chinner wrote:
> > On Wed, Feb 24, 2010 at 10:41:01AM +0800, Wu Fengguang wrote:
> > > With default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
> > > readahead size 512k*15=7680k is much larger than necessary for typical
> > > clients.
> > >
> > > On an e1000e--e1000e connection, I got the following numbers
> > >
> > >   readahead size  throughput
> > >              16k   35.5 MB/s
> > >              32k   54.3 MB/s
> > >              64k   64.1 MB/s
> > >             128k   70.5 MB/s
> > >             256k   74.6 MB/s
> > > rsize ==>   512k   77.4 MB/s
> > >            1024k   85.5 MB/s
> > >            2048k   86.8 MB/s
> > >            4096k   87.9 MB/s
> > >            8192k   89.0 MB/s
> > >           16384k   87.7 MB/s
> > >
> > > So it seems that readahead_size=2*rsize (i.e. keeping two RPC requests in
> > > flight) already gets near full NFS bandwidth.
> > >
> > > The test script is:
> > >
> > > #!/bin/sh
> > >
> > > file=/mnt/sparse
> > > BDI=0:15
> > >
> > > for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
> > > do
> > > 	echo 3 > /proc/sys/vm/drop_caches
> > > 	echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
> > > 	echo readahead_size=${rasize}k
> > > 	dd if=$file of=/dev/null bs=4k count=1024000
> > > done
> >
> > That's doing a cached read out of the server cache, right? You
> > might find the results are different if the server has to read the
> > file from disk. I would expect reads from the server cache not
> > to require much readahead as there is no IO latency on the server
> > side for the readahead to hide....
>
> FWIW, if you mount the client with "-o rsize=32k" or the server only
> supports rsize <= 32k then this will probably hurt throughput a lot
> because then readahead will be capped at 64k instead of 480k....
I should have mentioned that in the changelog. Hope the updated one below
helps.
Thanks,
Fengguang
---
nfs: use 2*rsize readahead size
With default rsize=512k and NFS_MAX_READAHEAD=15, the current NFS
readahead size 512k*15=7680k is much larger than necessary for typical
clients.
On an e1000e--e1000e connection, I got the following numbers
(this reads a sparse file from the server and involves no disk IO):
  readahead size  throughput
             16k   35.5 MB/s
             32k   54.3 MB/s
             64k   64.1 MB/s
            128k   70.5 MB/s
            256k   74.6 MB/s
rsize ==>   512k   77.4 MB/s
           1024k   85.5 MB/s
           2048k   86.8 MB/s
           4096k   87.9 MB/s
           8192k   89.0 MB/s
          16384k   87.7 MB/s
So it seems that readahead_size=2*rsize (i.e. keeping two RPC requests in
flight) already gets near full NFS bandwidth.
To avoid a small readahead window when the client mounts with "-o rsize=32k"
or the server only supports rsize <= 32k, we take the maximum of 2*rsize and
default_backing_dev_info.ra_pages. The latter defaults to 512k, is
automatically scaled down when system memory is less than 512M, and can be
explicitly changed by the user with the "readahead=" kernel parameter.
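Not part of the patch, just a small userspace sketch of that sizing rule for
illustration. The 4k page size and the 128-page default are assumed values
here; the kernel uses default_backing_dev_info.ra_pages and server->rpages
directly:

#include <stdio.h>

#define PAGE_SIZE		4096UL	/* assumed client page size */
#define DEFAULT_RA_PAGES	128UL	/* 512k/4k, the usual default ra_pages */

static unsigned long nfs_ra_pages(unsigned long rsize)
{
	/* round rsize up to whole pages, like server->rpages in the patch */
	unsigned long rpages = (rsize + PAGE_SIZE - 1) / PAGE_SIZE;
	unsigned long ra = 2 * rpages;

	/* never drop below the system default readahead window */
	return ra > DEFAULT_RA_PAGES ? ra : DEFAULT_RA_PAGES;
}

int main(void)
{
	unsigned long rsizes[] = { 32 << 10, 512 << 10, 1 << 20 };
	size_t i;

	for (i = 0; i < sizeof(rsizes) / sizeof(rsizes[0]); i++)
		printf("rsize=%5luk -> readahead=%5luk\n",
		       rsizes[i] >> 10,
		       nfs_ra_pages(rsizes[i]) * PAGE_SIZE >> 10);
	return 0;
}

With these assumptions an rsize=32k mount still gets a 512k readahead window,
while the default rsize=512k gets 1024k, i.e. two RPCs in flight.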
The test script is:
#!/bin/sh
file=/mnt/sparse
BDI=0:15
for rasize in 16 32 64 128 256 512 1024 2048 4096 8192 16384
do
	# drop the page cache so each run starts cold, then set the readahead window
	echo 3 > /proc/sys/vm/drop_caches
	echo $rasize > /sys/devices/virtual/bdi/$BDI/read_ahead_kb
	echo readahead_size=${rasize}k
	# sequential read of a ~4GB sparse file over NFS
	dd if=$file of=/dev/null bs=4k count=1024000
done
CC: Dave Chinner <david@...morbit.com>
CC: Trond Myklebust <Trond.Myklebust@...app.com>
Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
---
fs/nfs/client.c | 4 +++-
fs/nfs/internal.h | 8 --------
2 files changed, 3 insertions(+), 9 deletions(-)
--- linux.orig/fs/nfs/client.c 2010-02-23 11:15:44.000000000 +0800
+++ linux/fs/nfs/client.c 2010-02-24 10:16:00.000000000 +0800
@@ -889,7 +889,9 @@ static void nfs_server_set_fsinfo(struct
server->rpages = (server->rsize + PAGE_CACHE_SIZE - 1) >> PAGE_CACHE_SHIFT;
server->backing_dev_info.name = "nfs";
- server->backing_dev_info.ra_pages = server->rpages * NFS_MAX_READAHEAD;
+ server->backing_dev_info.ra_pages = max_t(unsigned long,
+ default_backing_dev_info.ra_pages,
+ 2 * server->rpages);
server->backing_dev_info.capabilities |= BDI_CAP_ACCT_UNSTABLE;
if (server->wsize > max_rpc_payload)
--- linux.orig/fs/nfs/internal.h 2010-02-23 11:15:44.000000000 +0800
+++ linux/fs/nfs/internal.h 2010-02-23 13:26:00.000000000 +0800
@@ -10,14 +10,6 @@
struct nfs_string;
-/* Maximum number of readahead requests
- * FIXME: this should really be a sysctl so that users may tune it to suit
- * their needs. People that do NFS over a slow network, might for
- * instance want to reduce it to something closer to 1 for improved
- * interactive response.
- */
-#define NFS_MAX_READAHEAD (RPC_DEF_SLOT_TABLE - 1)
-
/*
* Determine if sessions are in use.
*/
--