Message-Id: <20221222205511.675832-3-david@redhat.com>
Date: Thu, 22 Dec 2022 21:55:11 +0100
From: David Hildenbrand <david@...hat.com>
To: linux-kernel@...r.kernel.org
Cc: linux-mm@...ck.org, David Hildenbrand <david@...hat.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Peter Xu <peterx@...hat.com>,
Muchun Song <muchun.song@...ux.dev>,
Miaohe Lin <linmiaohe@...wei.com>, stable@...r.kernel.org
Subject: [PATCH v1 2/2] mm/hugetlb: fix uffd-wp handling for migration entries in hugetlb_change_protection()

We have to update the uffd-wp SWP PTE bit independently of the type of
migration entry. Currently, if we're unlucky and want to install/clear
the uffd-wp bit just while we're migrating a read-only mapped hugetlb
page, we would fail to set/clear the uffd-wp bit.

Further, if we're processing a readable-exclusive migration entry and
want to neither set nor clear the uffd-wp bit, we could currently end up
losing the uffd-wp bit. Note that the same would hold for writable
migration entries; however, a writable migration entry with the uffd-wp
bit set would already mean that something went wrong.
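
To make these two failure modes concrete, here is a condensed sketch of
the pre-patch flow (paraphrased from the code this patch removes, not
verbatim):

	if (!is_readable_migration_entry(entry)) { /* writable or readable-exclusive */
		pte_t newpte;

		/* ... downgrade writable -> readable[-exclusive] entry ... */
		newpte = swp_entry_to_pte(entry); /* old PTE's uffd-wp bit not carried over */
		if (uffd_wp)
			newpte = pte_swp_mkuffd_wp(newpte);
		else if (uffd_wp_resolve)
			newpte = pte_swp_clear_uffd_wp(newpte);
		/* readable-exclusive entry + neither flag set: uffd-wp bit lost here */
		set_huge_pte_at(mm, address, ptep, newpte);
	}
	/* readable entries skip the block entirely: uffd_wp/uffd_wp_resolve ignored */
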
Note that the change from !is_readable_migration_entry() to
is_writable_migration_entry() is harmless and actually cleaner, as
raised by Miaohe Lin and discussed in [1].

[1] https://lkml.kernel.org/r/90dd6a93-4500-e0de-2bf0-bf522c311b0c@huawei.com
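
For readers less familiar with this code: migration entries come in
three flavors, and the two predicates relate as sketched below (modeled
on include/linux/swapops.h; simplified, not the exact kernel
implementation):

	/* Simplified model, following include/linux/swapops.h: */
	bool is_readable_migration_entry(swp_entry_t entry);		/* SWP_MIGRATION_READ */
	bool is_readable_exclusive_migration_entry(swp_entry_t entry);	/* SWP_MIGRATION_READ_EXCLUSIVE */
	bool is_writable_migration_entry(swp_entry_t entry);		/* SWP_MIGRATION_WRITE */

	/*
	 * !is_readable_migration_entry(entry) matches writable *and*
	 * readable-exclusive entries; is_writable_migration_entry(entry)
	 * matches writable entries only. With this patch,
	 * readable-exclusive entries are no longer needlessly recreated,
	 * only their uffd-wp bit is updated.
	 */
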
Fixes: 60dfaad65aa9 ("mm/hugetlb: allow uffd wr-protect none ptes")
Cc: <stable@...r.kernel.org>
Signed-off-by: David Hildenbrand <david@...hat.com>
---
mm/hugetlb.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 3a94f519304f..9552a6d1a281 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -6516,10 +6516,9 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 		} else if (unlikely(is_hugetlb_entry_migration(pte))) {
 			swp_entry_t entry = pte_to_swp_entry(pte);
 			struct page *page = pfn_swap_entry_to_page(entry);
+			pte_t newpte = pte;
 
-			if (!is_readable_migration_entry(entry)) {
-				pte_t newpte;
-
+			if (is_writable_migration_entry(entry)) {
 				if (PageAnon(page))
 					entry = make_readable_exclusive_migration_entry(
 								swp_offset(entry));
@@ -6527,13 +6526,15 @@ unsigned long hugetlb_change_protection(struct vm_area_struct *vma,
 					entry = make_readable_migration_entry(
 								swp_offset(entry));
 				newpte = swp_entry_to_pte(entry);
-				if (uffd_wp)
-					newpte = pte_swp_mkuffd_wp(newpte);
-				else if (uffd_wp_resolve)
-					newpte = pte_swp_clear_uffd_wp(newpte);
-				set_huge_pte_at(mm, address, ptep, newpte);
 				pages++;
 			}
+
+			if (uffd_wp)
+				newpte = pte_swp_mkuffd_wp(newpte);
+			else if (uffd_wp_resolve)
+				newpte = pte_swp_clear_uffd_wp(newpte);
+			if (!pte_same(pte, newpte))
+				set_huge_pte_at(mm, address, ptep, newpte);
 		} else if (unlikely(is_pte_marker(pte))) {
 			/* No other markers apply for now. */
 			WARN_ON_ONCE(!pte_marker_uffd_wp(pte));
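
For reference, the resulting block in hugetlb_change_protection() with
this patch applied looks as follows (reconstructed from the hunks
above):

	} else if (unlikely(is_hugetlb_entry_migration(pte))) {
		swp_entry_t entry = pte_to_swp_entry(pte);
		struct page *page = pfn_swap_entry_to_page(entry);
		pte_t newpte = pte;

		if (is_writable_migration_entry(entry)) {
			if (PageAnon(page))
				entry = make_readable_exclusive_migration_entry(
							swp_offset(entry));
			else
				entry = make_readable_migration_entry(
							swp_offset(entry));
			newpte = swp_entry_to_pte(entry);
			pages++;
		}

		/* Now applied independently of the migration entry type: */
		if (uffd_wp)
			newpte = pte_swp_mkuffd_wp(newpte);
		else if (uffd_wp_resolve)
			newpte = pte_swp_clear_uffd_wp(newpte);
		if (!pte_same(pte, newpte))
			set_huge_pte_at(mm, address, ptep, newpte);
	}
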
--
2.38.1