Message-ID: <4526E229.2020707@RedHat.com>
Date: Fri, 06 Oct 2006 19:09:29 -0400
From: Steve Dickson <SteveD@...hat.com>
To: Andrew Morton <akpm@...l.org>
CC: Trond Myklebust <Trond.Myklebust@...app.com>,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH] VM: Fix the gfp_mask in invalidate_complete_page2
Andrew Morton wrote:
> On Fri, 06 Oct 2006 18:19:27 -0400
> Trond Myklebust <Trond.Myklebust@...app.com> wrote:
>
>
>>On Fri, 2006-10-06 at 18:16 -0400, Trond Myklebust wrote:
>>
>>>Yeah using mapping_gfp_mask(mapping) sounds like a better option.
>>
>>Revised patch is attached...
>
>
> Well, it wasn't attached, but I can simulate it.
>
> invalidate_complete_page() wants to be called from inside spinlocks by
> drop_pagecache(), so if we wanted to pull the same trick there we'd need to
> pass a new flag into invalidate_inode_pages().
That seems a bit broken (wrt performance): drop_pagecache_sb() holds
the fairly popular inode_lock while it invalidates pages...
Nobody else seems to...
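
(For reference, a simplified sketch of what fs/drop_caches.c does here --
paraphrased, not a verbatim copy -- the point being that the whole inode
walk runs under the inode_lock spinlock, so nothing called from it may
sleep:)

/*
 * Simplified sketch of drop_pagecache_sb(), paraphrased from memory
 * rather than quoted from fs/drop_caches.c.  The real code also skips
 * inodes that are being freed; that is omitted here.
 */
static void drop_pagecache_sb(struct super_block *sb)
{
	struct inode *inode;

	spin_lock(&inode_lock);		/* global, heavily contended spinlock */
	list_for_each_entry(inode, &sb->s_inodes, i_sb_list) {
		/* runs with the spinlock held, so the gfp mask passed
		 * further down must not permit sleeping here */
		invalidate_inode_pages(inode->i_mapping);
	}
	spin_unlock(&inode_lock);
}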
>
> It's not 100% clear what the gfp_t _means_ in the try_to_release_page()
> context. Callees will rarely want to allocate memory (true?). So it
> conveys two concepts:
>
> a) can sleep. (__GFP_WAIT). That's fairly straightforward
>
> b) can take fs locks (__GFP_FS). This is less clear. By passing down
> __GFP_FS we're telling the callee that it's OK to take i_mutex, even
> lock_page(). That sounds pretty unsafe in this context, particularly
> the latter, as we're already holding a page lock.
>
> So perhaps the safer and more appropriate solution here is to pass in a
> bare __GFP_WAIT.
I agree... __GFP_WAIT does seem to be a bit more straightforward...
either way is fine... as long as it causes NFS to flush its pages...
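
(To illustrate what the callee would do with that mask -- a hypothetical
->releasepage() sketch, NOT the real nfs_release_page(); the two helpers
are made up.  With __GFP_WAIT the callee may sleep, e.g. waiting for the
server to commit the data, but without __GFP_FS it must not take i_mutex
or lock_page():)

/*
 * Hypothetical ->releasepage() sketch.  example_commit_page() and
 * example_free_private() are invented helpers standing in for whatever
 * a filesystem does to flush and drop the page's private data.
 * Returns 1 if the private data was released (the VM may then free the
 * page), 0 otherwise.
 */
static int example_release_page(struct page *page, gfp_t gfp)
{
	if (!PagePrivate(page))
		return 1;		/* nothing attached, OK to free */

	/*
	 * __GFP_WAIT: the caller allows us to sleep.  __GFP_FS would
	 * additionally allow taking i_mutex or lock_page(), which is
	 * exactly what should not be granted from this call site.
	 */
	if (!(gfp & __GFP_WAIT))
		return 0;		/* atomic context, cannot flush */

	if (example_commit_page(page) != 0)
		return 0;		/* flush failed, keep the page */

	example_free_private(page);
	return 1;
}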
steved.