Message-ID: <f0140b13-cca2-af9e-eb4b-82eda134eb8f@redhat.com>
Date:   Wed, 30 Oct 2019 10:34:22 +0000
From:   Steven Whitehouse <swhiteho@...hat.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>,
        Konstantin Khlebnikov <khlebnikov@...dex-team.ru>
Cc:     "Kirill A. Shutemov" <kirill@...temov.name>,
        Linux-MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Johannes Weiner <hannes@...xchg.org>,
        "cluster-devel@...hat.com" <cluster-devel@...hat.com>
Subject: Re: [PATCH] mm/filemap: do not allocate cache pages beyond end of
 file at read
Hi,
On 29/10/2019 16:52, Linus Torvalds wrote:
> On Tue, Oct 29, 2019 at 3:25 PM Konstantin Khlebnikov
> <khlebnikov@...dex-team.ru> wrote:
>> I think all network filesystems which synchronize metadata lazily should be
>> marked. For example as "SB_VOLATILE". And vfs could handle them specially.
> No need. The VFS layer doesn't call generic_file_buffered_read()
> directly anyway. It's just a helper function for filesystems to use if
> they want to.
>
> They could (and should) make sure the inode size is sufficiently
> up-to-date before calling it. And if they want something more
> synchronous, they can do it themselves.
>
> But NFS, for example, has open/close consistency, so the metadata
> revalidation is at open() time, not at read time.
>
>                 Linus
NFS may be OK here, but it will break GFS2, and there may be others too... 
OCFS2 is likely one, and I'm not sure about CIFS either. Does it really matter 
that we might occasionally allocate a page and then free it again?
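For reference, something along these lines is roughly what I'd expect such a
filesystem to do in its ->read_iter if it went down the route Linus suggests
(a rough sketch only, not from the patch under discussion; myfs_revalidate_size
is a made-up stand-in for the filesystem's own metadata revalidation, e.g. a
glock acquisition in GFS2 or an attribute refresh in NFS):

#include <linux/fs.h>
#include <linux/uio.h>

/* Hypothetical helper: bring inode->i_size up to date from the server
 * or from the other cluster nodes before trusting it. */
static int myfs_revalidate_size(struct inode *inode)
{
	return 0;
}

static ssize_t myfs_file_read_iter(struct kiocb *iocb, struct iov_iter *to)
{
	struct inode *inode = file_inode(iocb->ki_filp);
	int ret;

	/* Make sure the cached size is current before the generic helper
	 * uses it to decide which pages to allocate and fill. */
	ret = myfs_revalidate_size(inode);
	if (ret)
		return ret;

	return generic_file_read_iter(iocb, to);
}

const struct file_operations myfs_file_operations = {
	.read_iter	= myfs_file_read_iter,
	/* ... other operations elided ... */
};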
Ramfs is a simple test case, but at the same time it doesn't represent 
the complexity of a real-world filesystem. I'm just back from a few days' 
holiday, so apologies if I've missed something earlier in the discussion,
Steve.