Date:   Thu, 31 Oct 2019 11:40:56 +0000
From:   Steven Whitehouse <swhiteho@...hat.com>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     Konstantin Khlebnikov <khlebnikov@...dex-team.ru>,
        "Kirill A. Shutemov" <kirill@...temov.name>,
        Linux-MM <linux-mm@...ck.org>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        linux-fsdevel <linux-fsdevel@...r.kernel.org>,
        Alexander Viro <viro@...iv.linux.org.uk>,
        Johannes Weiner <hannes@...xchg.org>,
        "cluster-devel@...hat.com" <cluster-devel@...hat.com>,
        Ronnie Sahlberg <lsahlber@...hat.com>,
        Steve French <sfrench@...ba.org>,
        Andreas Gruenbacher <agruenba@...hat.com>,
        Bob Peterson <rpeterso@...hat.com>
Subject: Re: [PATCH] mm/filemap: do not allocate cache pages beyond end of
 file at read

Hi,

On 30/10/2019 10:54, Linus Torvalds wrote:
> On Wed, Oct 30, 2019 at 11:35 AM Steven Whitehouse <swhiteho@...hat.com> wrote:
>> NFS may be ok here, but it will break GFS2. There may be others too...
>> OCFS2 is likely one. Not sure about CIFS either. Does it really matter
>> that we might occasionally allocate a page and then free it again?
> Why are gfs2 and cifs doing things wrong?
For CIFS I've added Ronnie and Steve to comment on that.
> "readpage()" is not for synchrionizing metadata. Never has been. You
> shouldn't treat it that way, and you shouldn't then make excuses for
> filesystems that treat it that way.
>
> Look at mmap, for example. It will do the SIGBUS handling before
> calling readpage(). Same goes for the copyfile code. A filesystem that
> thinks "I will update size at readpage" is already fundamentally
> buggy.
>
> We do _recheck_ the inode size under the page lock, but that's to
> handle the races with truncate etc.
>
>              Linus
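
For reference, the ordering described above looks roughly like this (a 
condensed sketch, not the exact mm/filemap.c code; the _sketch name is 
just for illustration):

  /* Condensed sketch of the filemap_fault() ordering: the i_size check
   * (and SIGBUS) happens before ->readpage() is ever called, and the
   * size is rechecked under the page lock only to catch racing
   * truncates, not to discover the size in the first place. */
  vm_fault_t filemap_fault_sketch(struct vm_fault *vmf)
  {
          struct file *file = vmf->vma->vm_file;
          struct inode *inode = file->f_mapping->host;
          pgoff_t max_off;

          max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
          if (unlikely(vmf->pgoff >= max_off))
                  return VM_FAULT_SIGBUS; /* beyond EOF: no ->readpage() */

          /* ... look up or read the page, possibly via ->readpage() ... */

          /* Recheck under the page lock, purely for truncate races. */
          max_off = DIV_ROUND_UP(i_size_read(inode), PAGE_SIZE);
          if (unlikely(vmf->pgoff >= max_off)) {
                  unlock_page(vmf->page);
                  put_page(vmf->page);
                  return VM_FAULT_SIGBUS;
          }
          return VM_FAULT_LOCKED;
  }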

For the GFS2 side of things, the algorithm looks like this:

  - Is there an uptodate page in cache?

    Yes, return it

    No, call into the fs readpage() to get one

This is designed so that for pages that are available in the page cache, 
we don't even need to call into the filesystem at all. It is all dealt 
with at the page cache level, unless the page doesn't exist. At this 
point we don't know what i_size might be, and without the proposed 
patch it simply doesn't matter, since we will ask the filesystem via 
->readpage() for any page which is not in the cache.
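
In code, that fast path is roughly the following (a simplified sketch 
of what the generic code does, not the exact mm/filemap.c logic; the 
_sketch name is mine):

  /* Sketch of the lookup described above: an uptodate cached page is
   * returned without fs involvement; only a miss calls ->readpage(). */
  static struct page *read_cache_page_sketch(struct address_space *mapping,
                                             struct file *file, pgoff_t index)
  {
          struct page *page = find_get_page(mapping, index);
          int error;

          if (page) {
                  if (PageUptodate(page))
                          return page;    /* fast path: fs never called */
                  lock_page(page);        /* ->readpage() needs it locked */
                  if (PageUptodate(page)) {       /* read raced in */
                          unlock_page(page);
                          return page;
                  }
          } else {
                  page = page_cache_alloc(mapping);
                  if (!page)
                          return ERR_PTR(-ENOMEM);
                  /* add_to_page_cache_lru() returns the page locked */
                  if (add_to_page_cache_lru(page, mapping, index, GFP_KERNEL)) {
                          put_page(page); /* raced; a real caller retries */
                          return ERR_PTR(-EEXIST);
                  }
          }

          /* Note: no i_size check up to this point; ->readpage() is the
           * first place the fs can say "beyond EOF". */
          error = mapping->a_ops->readpage(file, page);
          if (error) {
                  put_page(page);
                  return ERR_PTR(error);
          }
          wait_on_page_locked(page);      /* fs unlocks on completion */
          if (!PageUptodate(page)) {
                  put_page(page);
                  return ERR_PTR(-EIO);
          }
          return page;
  }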

If the page doesn't exist, we have to take the cluster level locks 
(glocks in the case of GFS2) which are potentially expensive, certainly 
a lot more expensive than the page lock anyway. That is currently done 
at the ->readpage() level, although we do have to drop the page lock 
first and then get the locks in the correct order, since the lock 
ordering requires the glock to be taken in shared mode ahead of the page 
lock.
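
The dance looks roughly like this (simplified from what gfs2 does in 
->readpage(); not verbatim fs/gfs2 code):

  /* Sketch of the gfs2 ->readpage() lock ordering: drop the page lock,
   * take the glock shared, then re-take the page lock and check the
   * page is still ours before reading.  AOP_TRUNCATED_PAGE tells the
   * caller to retry the page cache lookup. */
  static int gfs2_readpage_sketch(struct file *file, struct page *page)
  {
          struct address_space *mapping = page->mapping;
          struct gfs2_inode *ip = GFS2_I(mapping->host);
          struct gfs2_holder gh;
          int error;

          unlock_page(page);              /* glock must come first */
          gfs2_holder_init(ip->i_gl, LM_ST_SHARED, 0, &gh);
          error = gfs2_glock_nq(&gh);
          if (unlikely(error))
                  goto out;
          error = AOP_TRUNCATED_PAGE;     /* default: caller retries */
          lock_page(page);
          if (page->mapping == mapping && !PageUptodate(page))
                  error = __gfs2_readpage(file, page); /* i_size now stable */
          else
                  unlock_page(page);      /* truncated or read by another */
          gfs2_glock_dq(&gh);
  out:
          gfs2_holder_uninit(&gh);
          if (error && error != AOP_TRUNCATED_PAGE)
                  lock_page(page);        /* glock failed before the read
                                             was issued; re-lock for caller */
          return error;
  }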

In the past we've always been able to just use the generic code, since 
it was written not to assume that i_size is valid outside of the fs 
specific locks. The aim has always been to use generic code as much as 
possible, even though there are some cases where we've had to depart 
from that for various reasons.

It appears that the filemap_fault issue has not been spotted before. 
I'm not quite sure how that was missed - it seems to show that we have 
some missing tests - but I agree that it does need to be fixed. It is a 
while since I last looked at that particular bit of code in detail, so 
my memory may be a bit fuzzy.

Andreas, Bob, have I missed anything here?

Steve.


