Message-Id: <20220304051708.86193-10-peterx@redhat.com>
Date: Fri, 4 Mar 2022 13:16:54 +0800
From: Peter Xu <peterx@...hat.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: peterx@...hat.com, Nadav Amit <nadav.amit@...il.com>,
Hugh Dickins <hughd@...gle.com>,
David Hildenbrand <david@...hat.com>,
Axel Rasmussen <axelrasmussen@...gle.com>,
Matthew Wilcox <willy@...radead.org>,
Alistair Popple <apopple@...dia.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jerome Glisse <jglisse@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
"Kirill A . Shutemov" <kirill@...temov.name>,
Andrea Arcangeli <aarcange@...hat.com>
Subject: [PATCH v7 09/23] mm/shmem: Allow file-backed mem to be uffd wr-protected on thps

We don't have a "huge" version of pte markers; instead, when necessary, we
split the thp.

However, splitting the thp is not enough, because a file-backed thp is
handled totally differently from anonymous thps: rather than doing a real
split, the thp pmd will simply get cleared in __split_huge_pmd_locked().

That does not suffice if, e.g., a thp covers the range [0, 2M) but we want
to wr-protect a small page residing in [4K, 8K): after __split_huge_pmd()
returns there will be a none pmd, and change_pmd_range() will just skip it
right after the split.

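To make the failure mode concrete, here is a simplified trace of what
happens without this patch for a file-backed thp (illustrative pseudo-flow
only, not verbatim kernel code):

  uffd wr-protect [4K, 8K) on a shmem VMA backed by a thp at [0, 2M):

    change_pmd_range(vma, pud, 4K, 8K, ...)
      pmd_trans_huge(*pmd)            -> true
      next - addr != HPAGE_PMD_SIZE   -> true, so:
      __split_huge_pmd(vma, pmd, ...) -> file-backed: the pmd is simply
                                         cleared, no pte table left behind
      pmd is now none                 -> the loop skips to the next pmd

  Result: change_pte_range() never runs for this pmd, and [4K, 8K) is
  silently left unprotected.
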
Here we leverage the previously introduced change_pmd_prepare() macro so
that we'll populate the pmd with a pgtable page after the pmd split (during
which the pmd will have been cleared for cases like shmem).  Then
change_pte_range() will do all the rest for us by installing the uffd-wp
pte marker at any none pte that we'd like to wr-protect.

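For reference, the idea behind change_pmd_prepare() can be sketched roughly
as below.  This is a simplified illustration only (the real macro is
introduced by an earlier patch in this series and may differ in detail);
uffd_wp_protect_file() is the helper used in the hunk below, and pte_alloc()
is the generic pgtable allocator:

	/*
	 * Sketch only: if this walk may need to install uffd-wp pte
	 * markers on a file-backed VMA, make sure a pgtable page is
	 * populated under the pmd, because __split_huge_pmd() may have
	 * just left it none.
	 */
	if (uffd_wp_protect_file(vma, cp_flags)) {
		if (pte_alloc(vma->vm_mm, pmd))
			/* allocation failed: nothing to walk, skip the pmd */
			return;
	}

With the pmd populated again, change_pte_range() can walk the (empty)
pgtable and install the uffd-wp pte marker at every none pte inside the
requested range, so a later fault on e.g. [4K, 8K) will still observe the
wr-protect.
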
Signed-off-by: Peter Xu <peterx@...hat.com>
---
 mm/mprotect.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/mm/mprotect.c b/mm/mprotect.c
index 4878b6b99df9..95b307d4766d 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -329,8 +329,15 @@ static inline unsigned long change_pmd_range(struct vm_area_struct *vma,
 		}
 
 		if (is_swap_pmd(*pmd) || pmd_trans_huge(*pmd) || pmd_devmap(*pmd)) {
-			if (next - addr != HPAGE_PMD_SIZE) {
+			if ((next - addr != HPAGE_PMD_SIZE) ||
+			    uffd_wp_protect_file(vma, cp_flags)) {
 				__split_huge_pmd(vma, pmd, addr, false, NULL);
+				/*
+				 * For file-backed, the pmd could have been
+				 * cleared; make sure pmd populated if
+				 * necessary, then fall-through to pte level.
+				 */
+				change_pmd_prepare(vma, pmd, cp_flags);
 			} else {
 				int nr_ptes = change_huge_pmd(vma, pmd, addr,
 							      newprot, cp_flags);
--
2.32.0