Message-Id: <20190320020642.4000-26-peterx@redhat.com>
Date: Wed, 20 Mar 2019 10:06:39 +0800
From: Peter Xu <peterx@...hat.com>
To: linux-mm@...ck.org, linux-kernel@...r.kernel.org
Cc: David Hildenbrand <david@...hat.com>,
Hugh Dickins <hughd@...gle.com>,
Maya Gokhale <gokhale2@...l.gov>,
Jerome Glisse <jglisse@...hat.com>,
Pavel Emelyanov <xemul@...tuozzo.com>,
Johannes Weiner <hannes@...xchg.org>, peterx@...hat.com,
Martin Cracauer <cracauer@...s.org>, Shaohua Li <shli@...com>,
Andrea Arcangeli <aarcange@...hat.com>,
Mike Kravetz <mike.kravetz@...cle.com>,
Denis Plotnikov <dplotnikov@...tuozzo.com>,
Mike Rapoport <rppt@...ux.vnet.ibm.com>,
Marty McFadden <mcfadden8@...l.gov>,
Mel Gorman <mgorman@...e.de>,
"Kirill A . Shutemov" <kirill@...temov.name>,
"Dr . David Alan Gilbert" <dgilbert@...hat.com>
Subject: [PATCH v3 25/28] userfaultfd: wp: fixup swap entries in change_pte_range

In change_pte_range() we currently do nothing for uffd if the PTE is a
swap entry.  That can lead to a data mismatch if the page we are about
to write-protect has been swapped out by the time UFFDIO_WRITEPROTECT
is sent.  This patch applies/removes the uffd-wp bit to swap entries
as well.

Signed-off-by: Peter Xu <peterx@...hat.com>
---
I kept this patch standalone mainly to make review easier.  It can
either remain a standalone patch or be squashed into the patch
"userfaultfd: wp: support swap and page migration".
---
mm/mprotect.c | 24 +++++++++++++-----------
1 file changed, 13 insertions(+), 11 deletions(-)
diff --git a/mm/mprotect.c b/mm/mprotect.c
index 96c0f521099d..a23e03053787 100644
--- a/mm/mprotect.c
+++ b/mm/mprotect.c
@@ -183,11 +183,11 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 			}
 			ptep_modify_prot_commit(mm, addr, pte, ptent);
 			pages++;
-		} else if (IS_ENABLED(CONFIG_MIGRATION)) {
+		} else if (is_swap_pte(oldpte)) {
 			swp_entry_t entry = pte_to_swp_entry(oldpte);
+			pte_t newpte;
 
 			if (is_write_migration_entry(entry)) {
-				pte_t newpte;
 				/*
 				 * A protection check is difficult so
 				 * just be safe and disable write
@@ -198,22 +198,24 @@ static unsigned long change_pte_range(struct vm_area_struct *vma, pmd_t *pmd,
 					newpte = pte_swp_mksoft_dirty(newpte);
 				if (pte_swp_uffd_wp(oldpte))
 					newpte = pte_swp_mkuffd_wp(newpte);
-				set_pte_at(mm, addr, pte, newpte);
-
-				pages++;
-			}
-
-			if (is_write_device_private_entry(entry)) {
-				pte_t newpte;
-
+			} else if (is_write_device_private_entry(entry)) {
 				/*
 				 * We do not preserve soft-dirtiness. See
 				 * copy_one_pte() for explanation.
 				 */
 				make_device_private_entry_read(&entry);
 				newpte = swp_entry_to_pte(entry);
-				set_pte_at(mm, addr, pte, newpte);
+			} else {
+				newpte = oldpte;
+			}
+
+			if (uffd_wp)
+				newpte = pte_swp_mkuffd_wp(newpte);
+			else if (uffd_wp_resolve)
+				newpte = pte_swp_clear_uffd_wp(newpte);
+
+			if (!pte_same(oldpte, newpte)) {
+				set_pte_at(mm, addr, pte, newpte);
 
 				pages++;
 			}
 		}
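
For reviewers who prefer the end result over the hunks, the swap-entry
branch after this patch reads roughly as below (reconstructed from the
diff above, so a sketch rather than the literal file contents):

	} else if (is_swap_pte(oldpte)) {
		swp_entry_t entry = pte_to_swp_entry(oldpte);
		pte_t newpte;

		if (is_write_migration_entry(entry)) {
			/* Downgrade writable migration entries to read */
			make_migration_entry_read(&entry);
			newpte = swp_entry_to_pte(entry);
			if (pte_swp_soft_dirty(oldpte))
				newpte = pte_swp_mksoft_dirty(newpte);
			if (pte_swp_uffd_wp(oldpte))
				newpte = pte_swp_mkuffd_wp(newpte);
		} else if (is_write_device_private_entry(entry)) {
			/* Soft-dirtiness is not preserved here */
			make_device_private_entry_read(&entry);
			newpte = swp_entry_to_pte(entry);
		} else {
			/* Plain swap entry: keep it, maybe flip uffd-wp */
			newpte = oldpte;
		}

		if (uffd_wp)
			newpte = pte_swp_mkuffd_wp(newpte);
		else if (uffd_wp_resolve)
			newpte = pte_swp_clear_uffd_wp(newpte);

		if (!pte_same(oldpte, newpte)) {
			set_pte_at(mm, addr, pte, newpte);
			pages++;
		}
	}

The final pte_same() check means the PTE is only rewritten (and counted
in "pages") when something actually changed, which also covers the new
no-op case of a plain swap entry with neither uffd_wp nor
uffd_wp_resolve set.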
--
2.17.1