Message-ID: <20091008014447.GA14224@localhost>
Date: Thu, 8 Oct 2009 09:44:47 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Peter Staubach <staubach@...hat.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Theodore Tso <tytso@....edu>,
Christoph Hellwig <hch@...radead.org>,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
"Li, Shaohua" <shaohua.li@...el.com>,
Myklebust Trond <Trond.Myklebust@...app.com>,
"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
Jan Kara <jack@...e.cz>, Nick Piggin <npiggin@...e.de>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 44/45] NFS: remove NFS_INO_FLUSHING lock
On Wed, Oct 07, 2009 at 09:59:10PM +0800, Peter Staubach wrote:
> Wu Fengguang wrote:
> > On Wed, Oct 07, 2009 at 09:11:15PM +0800, Peter Staubach wrote:
> >> Wu Fengguang wrote:
> >>> It was introduced in 72cb77f4a5ac, and the issues it addressed have
> >>> since been handled in generic writeback:
> >>> - out of order writeback (or interleaved concurrent writeback)
> >>> addressed by the per-bdi writeback and wait queue in balance_dirty_pages()
> >>> - sync livelocked by a fast dirtier
> >>> addressed by throttling all to-be-synced dirty inodes
> >>>
> >> I don't think that we can just remove this support. It is
> >> designed to reduce the effects from doing a stat(2) on a
> >> file which is being actively written to.
> >
> > Ah OK.
> >
> >> If we do remove it, then we will need to replace this patch
> >> with another. Trond and I hadn't quite finished discussing
> >> some aspects of that other patch... :-)
> >
> > I noticed the i_mutex lock in nfs_getattr(). Do you mean that?
> >
>
> Well, that's part of that support as well. That keeps a writing
> application from dirtying more pages while the application doing
> the stat is attempting to clean them.
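For reference, the serialization being discussed is roughly the pattern
below (a simplified sketch under my reading of the code, not the exact
nfs_getattr() body, which also revalidates attributes and handles errors):

#include <linux/fs.h>

/*
 * Sketch: take i_mutex so a concurrent writer cannot dirty new pages
 * while stat(2) flushes the inode, then write back and wait so the
 * reported c/mtime is up to date.
 */
static int nfs_getattr_flush_sketch(struct inode *inode)
{
	int err = 0;

	if (S_ISREG(inode->i_mode)) {
		mutex_lock(&inode->i_mutex);
		err = filemap_write_and_wait(inode->i_mapping);
		mutex_unlock(&inode->i_mutex);
	}
	return err;
}
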
Instead of blocking the writer completely, we could throttle its writes.
The two attached patches are a gentler way of doing this; however, they
do not guarantee to kill the livelock, since a busy bdi-flush thread
could write back many pages and unfreeze the application prematurely.
Anyway, I attach them as a demo of the idea, for better or worse.
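To make that concrete, the dirtying-side wait could look roughly like
this (a sketch of the intent only, not the attached patches verbatim;
AS_SYNC_IN_PROGRESS and sync_writeback_wait are made-up names):

#include <linux/wait.h>
#include <linux/sched.h>
#include <linux/fs.h>

/* hypothetical new address_space flag set by the sync side */
#define AS_SYNC_IN_PROGRESS	5

static DECLARE_WAIT_QUEUE_HEAD(sync_writeback_wait);

/*
 * Illustrative sketch: while sync is flushing a mapping, make the
 * dirtying path wait instead of excluding it with i_mutex.  The sync
 * side would set/clear the bit around writeback and wake the queue.
 */
static void wait_for_mapping_sync(struct address_space *mapping)
{
	DEFINE_WAIT(wait);

	while (test_bit(AS_SYNC_IN_PROGRESS, &mapping->flags)) {
		prepare_to_wait(&sync_writeback_wait, &wait,
				TASK_UNINTERRUPTIBLE);
		if (test_bit(AS_SYNC_IN_PROGRESS, &mapping->flags))
			schedule();
		finish_wait(&sync_writeback_wait, &wait);
	}
}
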
> Another approach that I suggested was to keep track of the
> number of pages which are dirty on a per-inode basis. When
Yes, a per-inode dirty count should be trivial to add, and could be
useful for others as well.
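A minimal sketch of what that could look like (ndirty is an assumed new
field in struct nfs_inode; the hook points are left out):

#include <linux/nfs_fs.h>

/*
 * Hypothetical per-inode dirty page counter: bumped when a page is
 * first dirtied, dropped when it is written back or truncated.
 * "ndirty" is an assumed new atomic_long_t in struct nfs_inode.
 */
static void nfs_account_page_dirtied(struct inode *inode)
{
	atomic_long_inc(&NFS_I(inode)->ndirty);
}

static void nfs_account_page_cleaned(struct inode *inode)
{
	atomic_long_dec(&NFS_I(inode)->ndirty);
}
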
> enough pages are dirty to fill an over the wire transfer,
> then schedule an asynchronous write to transmit that data to
This should also be trivial to support once the location-ordered
writeback infrastructure is ready.
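For instance, something along these lines (reusing the hypothetical
ndirty counter above; nfs_kick_async_flush() is just a placeholder for
whatever mechanism actually queues the I/O):

#include <linux/nfs_fs.h>
#include <linux/pagemap.h>

/*
 * Once a full wsize worth of pages is dirty on this inode, kick an
 * asynchronous flush instead of waiting for background writeback.
 */
static void nfs_maybe_start_flush(struct inode *inode)
{
	unsigned long pages_per_rpc = NFS_SERVER(inode)->wsize >> PAGE_CACHE_SHIFT;

	if (atomic_long_read(&NFS_I(inode)->ndirty) >= pages_per_rpc)
		nfs_kick_async_flush(inode);
}
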
> the server. This ties in with support to ensure that the
> server/network is not completely overwhelmed by the client
> by flow controlling the writing application to better match
> the bandwidth and latencies of the network and server.
I like that feature :)
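I imagine the flow control would resemble the existing write congestion
logic, i.e. make the writer wait once too many pages are in flight to
the server. A rough sketch (nwriteback and nfs_writeback_wait are
made-up names; nfs_congestion_kb is the existing knob):

#include <linux/nfs_fs.h>
#include <linux/pagemap.h>
#include <linux/wait.h>

/*
 * Sketch of dirtying-side flow control: if the pages already in
 * flight to the server exceed the congestion threshold, block the
 * writer until the server catches up.
 */
static void nfs_wait_for_server(struct inode *inode)
{
	long limit = nfs_congestion_kb >> (PAGE_CACHE_SHIFT - 10);

	wait_event(nfs_writeback_wait,
		   atomic_long_read(&NFS_I(inode)->nwriteback) < limit);
}
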
> With this support, the NFS client tends not to fill memory
> with dirty pages and thus, does not depend upon the other
> parts of the system to flush these pages.
>
> All of these recent patches make this current flushing happen
> in a much more orderly fashion, which is great. However,
Thanks.
> this can still lead to the client attempting to flush
> potentially gigabytes all at once, which is more than most
> networks and servers can handle reasonably.
OK, I now see the need to keep mapping->nr_dirty under control: it
would make many NFS operations respond in a more bounded fashion.
The good thing is, it can share infrastructure with the location-based
writeback (http://lkml.org/lkml/2007/8/27/45) :)
Thanks,
Fengguang
View attachment "writeback-throttle-sync-mapping.patch" of type "text/x-diff" (2488 bytes)
View attachment "nfs-no-i_mutex-for-livelock-prevention.patch" of type "text/x-diff" (1497 bytes)