Message-ID: <4C1780F2.7010003@redhat.com>
Date: Tue, 15 Jun 2010 09:32:34 -0400
From: Rik van Riel <riel@...hat.com>
To: Christoph Hellwig <hch@...radead.org>
CC: Dave Chinner <david@...morbit.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@....ul.ie>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>,
Johannes Weiner <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>
Subject: Re: [PATCH 11/12] vmscan: Write out dirty pages in batch
On 06/15/2010 07:01 AM, Christoph Hellwig wrote:
> On Mon, Jun 14, 2010 at 09:16:29PM -0400, Rik van Riel wrote:
>>> Besides, there really isn't the right context in the block layer to
>>> be able to queue and prioritise large amounts of IO without
>>> significant penalties to some higher layer operation.
>>
>> Can we kick flushing for the whole inode at once from
>> vmscan.c?
>
> kswapd really should be a last effort tool to clean filesystem pages.
> If it does enough I/O for this to matter significantly we need to
> fix the VM to move more work to the flusher threads instead of trying
> to fix kswapd.
>
>> Would it be hard to add a "please flush this file"
>> way to call the filesystem flushing threads?
>
> We already have that API, in Jens' latest tree that's
> sync_inodes_sb/writeback_inodes_sb. We could also add a non-waiting
> variant if required, but I think the big problem with kswapd is that
> we want to wait on I/O completion under some circumstances.
However, kswapd does not need to wait on I/O completion of any
particular page - it just needs I/O to complete on some inactive
pages in the zone (or memcg) it is trying to free memory from.
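
Just to illustrate the direction I mean - a completely untested
sketch, and the helper name reclaim_nudge_flushers() is made up -
kswapd could nudge the flusher threads and then back off, instead
of issuing and waiting on single-page I/O itself:

/*
 * Untested sketch, not against any particular tree.  Roughly needs
 * linux/writeback.h and linux/backing-dev.h.
 */
static void reclaim_nudge_flushers(long nr_pages)
{
	/* Ask the bdi flusher threads to write back ~nr_pages pages. */
	wakeup_flusher_threads(nr_pages);

	/*
	 * No wait_on_page_writeback() on one specific page: a short
	 * congestion backoff is enough, since forward progress comes
	 * from whichever writeback completes first in the zone.
	 */
	congestion_wait(BLK_RW_ASYNC, HZ/10);
}

The point being that vmscan only needs a "please clean some of this"
hook, not a "wait for this exact page" one.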
--
All rights reversed