Message-Id: <1515777968-867-4-git-send-email-ldufour@linux.vnet.ibm.com>
Date: Fri, 12 Jan 2018 18:25:47 +0100
From: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
To: paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
akpm@...ux-foundation.org, kirill@...temov.name,
ak@...ux.intel.com, mhocko@...nel.org, dave@...olabs.net,
jack@...e.cz, Matthew Wilcox <willy@...radead.org>,
benh@...nel.crashing.org, mpe@...erman.id.au, paulus@...ba.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, hpa@...or.com,
Will Deacon <will.deacon@....com>,
Sergey Senozhatsky <sergey.senozhatsky@...il.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Alexei Starovoitov <alexei.starovoitov@...il.com>,
kemi.wang@...el.com, sergey.senozhatsky.work@...il.com
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
haren@...ux.vnet.ibm.com, khandual@...ux.vnet.ibm.com,
npiggin@...il.com, bsingharora@...il.com,
Tim Chen <tim.c.chen@...ux.intel.com>,
linuxppc-dev@...ts.ozlabs.org, x86@...nel.org
Subject: [PATCH v6 03/24] mm: Don't assume page-table invariance during faults

From: Peter Zijlstra <peterz@...radead.org>

One of the side effects of speculating on faults (without holding
mmap_sem) is that we can race with free_pgtables() and therefore we
cannot assume the page-tables will stick around.

Remove the reliance on the pte pointer.
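
To make the race concrete, an illustrative timeline (a sketch for this
changelog, not code from the patch; free_pgtables() runs from
unmap_region()/exit_mmap()):

/*
 *   CPU 0: speculative fault                CPU 1: munmap()/exit_mmap()
 *   -------------------------               ---------------------------
 *   vmf->pte = pte_offset_map(pmd, addr);
 *   (mmap_sem not held)                     unmap_region()
 *                                             free_pgtables()
 *                                               -> page-table page freed
 *   pte_same(*vmf->pte, orig_pte);
 *     -> dereferences freed memory
 */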

Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>

[Limit the pte_unmap_same() checks to !CONFIG_SPF builds]
Signed-off-by: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
---
 mm/memory.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/memory.c b/mm/memory.c
index 8a80986fff48..259f621345b2 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -2274,6 +2274,7 @@ int apply_to_page_range(struct mm_struct *mm, unsigned long addr,
 }
 EXPORT_SYMBOL_GPL(apply_to_page_range);
 
+#ifndef CONFIG_SPF
 /*
  * handle_pte_fault chooses page fault handler according to an entry which was
  * read non-atomically. Before making any commitment, on those architectures
@@ -2297,6 +2298,7 @@ static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
 	pte_unmap(page_table);
 	return same;
 }
+#endif /* CONFIG_SPF */
 
 static inline void cow_user_page(struct page *dst, struct page *src, unsigned long va, struct vm_area_struct *vma)
 {
@@ -2884,11 +2886,13 @@ int do_swap_page(struct vm_fault *vmf)
 		swapcache = page;
 	}
 
+#ifndef CONFIG_SPF
 	if (!pte_unmap_same(vma->vm_mm, vmf->pmd, vmf->pte, vmf->orig_pte)) {
 		if (page)
 			put_page(page);
 		goto out;
 	}
+#endif
 
 	entry = pte_to_swp_entry(vmf->orig_pte);
 	if (unlikely(non_swap_entry(entry))) {
--
2.7.4
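
For reference, the check being compiled out above works by re-reading
the PTE through the previously saved pointer under the page-table lock,
which is exactly the access that becomes unsafe once the table can be
freed underneath the fault handler. A paraphrased sketch of
pte_unmap_same() as it looks in this kernel series (context only, not
part of the diff):

static inline int pte_unmap_same(struct mm_struct *mm, pmd_t *pmd,
				 pte_t *page_table, pte_t orig_pte)
{
	int same = 1;
#if defined(CONFIG_SMP) || defined(CONFIG_PREEMPT)
	if (sizeof(pte_t) > sizeof(unsigned long)) {
		spinlock_t *ptl = pte_lockptr(mm, pmd);
		spin_lock(ptl);
		/* Re-reads the PTE through the saved pointer: this
		 * relies on the page-table page staying allocated. */
		same = pte_same(*page_table, orig_pte);
		spin_unlock(ptl);
	}
#endif
	pte_unmap(page_table);
	return same;
}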