Message-ID: <4C19030A.4070406@redhat.com>
Date: Wed, 16 Jun 2010 12:59:54 -0400
From: Rik van Riel <riel@...hat.com>
To: Nick Piggin <npiggin@...e.de>
CC: Christoph Hellwig <hch@...radead.org>,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mel@....ul.ie>, linux-kernel@...r.kernel.org,
linux-fsdevel@...r.kernel.org, linux-mm@...ck.org,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>
Subject: Re: [RFC PATCH 0/6] Do not call ->writepage[s] from direct reclaim
	and use a_ops->writepages() where possible

On 06/16/2010 03:57 AM, Nick Piggin wrote:
> On Tue, Jun 15, 2010 at 03:13:09PM -0400, Rik van Riel wrote:
>> On 06/15/2010 12:54 PM, Christoph Hellwig wrote:
>>> On Tue, Jun 15, 2010 at 12:49:49PM -0400, Rik van Riel wrote:
>>>> This is already in a filesystem. Why does ->writepage get
>>>> called a second time? Shouldn't this have a gfp_mask
>>>> without __GFP_FS set?
>>>
>>> Why would it? GFP_NOFS is not for all filesystem code, but only for
>>> code where we can't re-enter the filesystem due to deadlock potential.
>>
>> Why? How about because you know the stack is not big enough
>> to have the XFS call path on it twice? :)
>>
>> Isn't the whole purpose of this patch series to prevent writepage
>> from being called by the VM, when invoked from a deep callstack
>> like xfs writepage?
>>
>> That sounds a lot like simply wanting to not have GFP_FS...
>
> buffered write path uses __GFP_FS by design because huge amounts
> of (dirty) memory can be allocated in doing pagecache writes.  It
> would be nasty if that was not allowed to wait for filesystem
> activity.
__GFP_IO means the allocation can wait for filesystem activity;
__GFP_FS means it can kick off new filesystem activity.

At least, that's how I remember it from when I last looked
at that code in detail.  Things may have changed subtly.
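
For illustration, here is a rough userspace toy of how I think of
reclaim gating ->writepage on the caller's gfp mask.  This is not the
real mm/vmscan.c code; the flag values and the may_enter_fs() helper
are made up for this sketch:

/*
 * Toy sketch (userspace, not kernel code) of the __GFP_IO / __GFP_FS
 * distinction as described above.  Flag values are illustrative only.
 */
#include <stdbool.h>
#include <stdio.h>

#define __GFP_IO   0x01u   /* may wait for / start I/O */
#define __GFP_FS   0x02u   /* may call back into filesystem code */

/* GFP_NOFS-style mask: I/O allowed, but no re-entry into the fs. */
#define GFP_NOFS   (__GFP_IO)
/* GFP_KERNEL-style mask: both allowed. */
#define GFP_KERNEL (__GFP_IO | __GFP_FS)

/* Hypothetical reclaim check: only call ->writepage when the
 * allocation context said it is safe to re-enter filesystem code. */
static bool may_enter_fs(unsigned int gfp_mask)
{
        return gfp_mask & __GFP_FS;
}

static void reclaim_dirty_page(unsigned int gfp_mask)
{
        if (!may_enter_fs(gfp_mask)) {
                /* A GFP_NOFS caller is already inside the filesystem
                 * (deadlock / stack depth risk), so keep the page
                 * dirty and let kswapd or the flusher threads write
                 * it out later. */
                printf("gfp=0x%x: skip ->writepage, defer writeback\n",
                       gfp_mask);
                return;
        }
        /* A GFP_KERNEL caller may kick off new filesystem activity. */
        printf("gfp=0x%x: ok to call ->writepage\n", gfp_mask);
}

int main(void)
{
        reclaim_dirty_page(GFP_NOFS);    /* allocation from inside a fs */
        reclaim_dirty_page(GFP_KERNEL);  /* ordinary allocation */
        return 0;
}

The point being: a GFP_NOFS allocation can still wait for I/O to
complete, but reclaim done on its behalf should not recurse back into
filesystem code the way a GFP_KERNEL allocation can.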
--
All rights reversed