Message-ID: <20201201223033.GG3277@xz-x1>
Date: Tue, 1 Dec 2020 17:30:33 -0500
From: Peter Xu <peterx@...hat.com>
To: Matthew Wilcox <willy@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
Andrew Morton <akpm@...ux-foundation.org>,
Hugh Dickins <hughd@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
David Hildenbrand <david@...hat.com>
Subject: Re: [PATCH v2] mm: Don't fault around userfaultfd-registered regions
on reads
On Tue, Dec 01, 2020 at 12:59:27PM +0000, Matthew Wilcox wrote:
> On Mon, Nov 30, 2020 at 06:06:03PM -0500, Peter Xu wrote:
> > Faulting around on reads is in most cases helpful for performance, since
> > continuous memory accesses may avoid another trip through the page fault
> > path. However, it may not always work as expected.
> >
> > For example, userfaultfd registered regions may not be the best candidate for
> > pre-faults around the reads.
> >
> > For missing-mode uffds, fault-around does not help: if the page cache
> > existed, the page should be there already; if the page cache is not
> > there, there is nothing else we can do either. If the fault-around code
> > is destined to be helpless for userfault-missing vmas, then ideally we
> > can skip it.
>
> This sounds like you're thinking of a file which has exactly one user.
> If there are multiple processes mapping the same file, then no, there's
> no reason to expect a page to be already present in the page table,
> just because it's present in the page cache.
>
> > For wr-protected mode uffds, erroneously faulting in those pages around could
> > lead to threads accessing the pages without the uffd server's awareness. For
> > example, when punching holes in uffd-wp registered shmem regions, we'll first
> > try to unmap all the pages before evicting the page cache, but without locking
> > the page (please refer to shmem_fallocate(), where unmap_mapping_range() is
> > called before shmem_truncate_range()). When fault-around happens near a hole
> > being punched, we might erroneously fault in the "holes" right before they are
> > punched. Then there's a small window after the page becomes writable again and
> > before the page cache is finally dropped (NOTE: the uffd-wp protect
> > information is totally lost due to the pre-unmap in shmem_fallocate(), so the
> > page can be writable within the small window). That's severe data loss.
>
> This still doesn't make sense. If the page is Uptodate in the page
> cache, then userspace gets to access it. If you don't want the page to
> be accessible, ClearPageUptodate(). read() can also access it if it's
> marked Uptodate. A write fault on a page will call the filesystem's
> page_mkwrite() and you can block it there.
I still don't think page_mkwrite() could help here... Though Andrea pointed
out a more important issue against swap cache (in the v1 thread [1]). Indeed,
if we get those figured out, maybe we'll also rethink this patch; then it
could become optional. That said, something like it still seems required to
allow shmem swap in/out with uffd-wp, which I haven't tested yet. As Hugh
pointed out, purely reusing _PAGE_SWP_UFFD_WP in the swap cache may not work
trivially, since uffd-wp is per-pte rather than per-page, so I probably need
to think a bit more on how to do that...
I don't know whether a patch like this will still be useful in the future.
For now, let's drop it until we solve the rest of the puzzle.
My thanks to all the reviewers, and sorry for the noise!
NAK myself.
[1] https://lore.kernel.org/lkml/alpine.LSU.2.11.2012011250070.1582@eggly.anvils/T/#mef0716b38f4f5fc07b7542f2c11a07535ea31aad
--
Peter Xu