Message-ID: <20111120015755.GA7161@localhost>
Date:	Sun, 20 Nov 2011 09:57:55 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Jim Rees <rees@...ch.edu>
Cc:	Trond Myklebust <Trond.Myklebust@...app.com>,
	linux-nfs@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	LKML <linux-kernel@...r.kernel.org>,
	Feng Tang <feng.tang@...el.com>
Subject: Re: [PATCH] nfs: writeback pages wait queue

Hi Jim,

On Sat, Nov 19, 2011 at 09:44:12PM +0800, Jim Rees wrote:
> Wu Fengguang wrote:
> 
>   The generic writeback routines are departing from congestion_wait()
>   in favor of get_request_wait(), i.e. waiting on the block queues.
>   
>   Introduce the missing writeback wait queue for NFS; otherwise its
>   writeback pages will grow greedily, exhausting all PG_dirty pages.
>   
>   Tests show that it can effectively reduce stalls in the disk-network
>   pipeline, improve performance and reduce delays.
> 
> This is great stuff.  Did you do any tests on long delay paths?  I did some
> work on this a few years ago and made some progress but not enough.

Good question! I didn't test fat pipelines, which would certainly
require a reasonably high nfs_congestion_kb to work well.

However, we stand a good chance with the defaults.

nfs_congestion_kb is computed at module load time in
nfs_init_writepagecache():

        /*
         * NFS congestion size, scale with available memory.
         *
         *  64MB:    8192k
         * 128MB:   11585k
         * 256MB:   16384k
         * 512MB:   23170k
         *   1GB:   32768k
         *   2GB:   46340k
         *   4GB:   65536k
         *   8GB:   92681k
         *  16GB:  131072k
         *
         * This allows larger machines to have larger/more transfers.
         * Limit the default to 256M
         */
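
For reference, the code that follows this comment (quoted from memory
of fs/nfs/write.c, so the exact expression may differ between kernel
versions) scales the value with the square root of total RAM and then
applies the cap:

        /* 16 * sqrt(totalram_pages) pages, expressed in kilobytes */
        nfs_congestion_kb = (16*int_sqrt(totalram_pages)) << (PAGE_SHIFT-10);
        if (nfs_congestion_kb > 256*1024)
                nfs_congestion_kb = 256*1024;   /* cap at 256MB */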

For a typical mem=4GB client, nfs_congestion_kb is 65536k (64MB),
which is enough to fill a 100ms * 100MB/s = 10MB network pipeline.
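
That 10MB figure is just the bandwidth-delay product. A minimal,
standalone sketch of the arithmetic, assuming the 100ms round-trip
time and 100MB/s bandwidth used above (illustration only, not kernel
code):

        /* Bandwidth-delay product: how much data must be in flight
         * to keep the pipe full. */
        #include <stdio.h>

        int main(void)
        {
                unsigned long rtt_ms  = 100;    /* assumed round-trip time */
                unsigned long bw_mb_s = 100;    /* assumed bandwidth, MB/s */

                /* MB/s * ms / 1000 = MB:  100 * 100 / 1000 = 10MB */
                printf("BDP = %luMB\n", bw_mb_s * rtt_ms / 1000);
                return 0;
        }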

There may be more demanding setups; however, those are rare cases,
and their users should be fully aware of the special requirements and
of the need for some hand tuning, for example:

        echo $((300<<10)) > /proc/sys/fs/nfs/nfs_congestion_kb   # 307200k = 300MB

Thanks,
Fengguang