Date:   Thu, 27 Feb 2020 20:26:46 -0800
From:   Matthew Wilcox <willy@...radead.org>
To:     Hugh Dickins <hughd@...gle.com>
Cc:     "Kirill A. Shutemov" <kirill@...temov.name>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Yang Shi <yang.shi@...ux.alibaba.com>,
        Alexander Duyck <alexander.duyck@...il.com>,
        "Michael S. Tsirkin" <mst@...hat.com>,
        David Hildenbrand <david@...hat.com>,
        "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
        Andrea Arcangeli <aarcange@...hat.com>, linux-mm@...ck.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH] huge tmpfs: try to split_huge_page() when punching hole

On Thu, Feb 27, 2020 at 08:04:21PM -0800, Hugh Dickins wrote:
> It's good to consider the implications for hole-punch on a persistent
> filesystem cached with THPs (or lower order compound pages); but I
> disagree that they should behave differently from this patch.
> 
> The hole-punch is fundamentally directed at freeing up the storage, yes;
> but its page cache must also be removed, otherwise you have the user
> writing into cache which is not backed by storage, and potentially losing
> the data later.  So a hole must be punched in the compound page in that
> case too: in fact, it's then much more important that split_huge_page()
> succeeds - not obvious what the fallback should be if it fails (perhaps
> in that case the compound page must be kept, but all its pmds removed,
> and info on holes kept in spare fields of the compound page, to prevent
> writes and write faults without calling back into the filesystem:
> soluble, but more work than tmpfs needs today)(and perhaps when that
> extra work is done, we would choose to rely on it rather than
> immediately splitting; but it will involve discounting the holes).

Ooh, a topic that reasonable people can disagree on!

The current prototype I have [1] will allocate (huge) pages and then
ask the filesystem to fill them.  The filesystem may well find that
the extent is a hole, and if it is, it will fill the page with zeroes.
Then, the application may write to those pages, and if it does, the
filesystem will be notified to create an on-disk extent for that write.

I haven't looked at the hole-punch path in detail, but presumably it
notifies the filesystem to create a hole extent and zeroes out the
pagecache for that range (possibly by removing entire pages, and with
memset for partial pages).  Then a subsequent write to the hole will
cause the filesystem to allocate a new non-hole extent, just like the
previous case.

I think it's reasonable for the page cache to interpret a hole-punch
request as being a hint that the hole is unlikely to be accessed again,
so allocating new smaller pages for that region of the file (or just
writing back & dropping the covering page altogether) would seem like
a reasonable implementation decision.

However, it also seems reasonable that just memset() of the affected
region, leaving the page intact, would be an acceptable implementation,
as long as writes to the newly-created hole dirty the page and thus
trigger writeback.  It probably wouldn't be as good an implementation,
but it shouldn't lose writes as you suggest above.

I'm not sure I'd choose to split a large page into smaller pages.  I think
I'd prefer to allocate lower-order pages and memcpy() the data over.
Again, that's an implementation choice, and not something that should
be visible outside the implementation.

[1] http://git.infradead.org/users/willy/linux-dax.git/shortlog/refs/heads/xarray-pagecache
