Message-Id: <1497018069-17790-21-git-send-email-ldufour@linux.vnet.ibm.com>
Date: Fri, 9 Jun 2017 16:21:09 +0200
From: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
To: paulmck@...ux.vnet.ibm.com, peterz@...radead.org,
akpm@...ux-foundation.org, kirill@...temov.name,
ak@...ux.intel.com, mhocko@...nel.org, dave@...olabs.net,
jack@...e.cz, Matthew Wilcox <willy@...radead.org>
Cc: linux-kernel@...r.kernel.org, linux-mm@...ck.org,
haren@...ux.vnet.ibm.com, khandual@...ux.vnet.ibm.com,
npiggin@...il.com, bsingharora@...il.com
Subject: [RFC v4 20/20] mm/spf: Clear FAULT_FLAG_KILLABLE in the speculative path
The flag FAULT_FLAG_KILLABLE must be cleared so that the mmap_sem cannot
be released in __lock_page_or_retry().
This patch also moves the clearing of FAULT_FLAG_ALLOW_RETRY into
handle_speculative_fault(), since it has to be done for all
architectures.
Signed-off-by: Laurent Dufour <ldufour@...ux.vnet.ibm.com>
---
arch/powerpc/mm/fault.c | 3 +--
arch/x86/mm/fault.c | 3 +--
mm/memory.c | 6 +++++-
3 files changed, 7 insertions(+), 5 deletions(-)
diff --git a/arch/powerpc/mm/fault.c b/arch/powerpc/mm/fault.c
index 6dd6a50f412f..4b6d0ed517ca 100644
--- a/arch/powerpc/mm/fault.c
+++ b/arch/powerpc/mm/fault.c
@@ -304,8 +304,7 @@ int do_page_fault(struct pt_regs *regs, unsigned long address,
if (is_write)
flags |= FAULT_FLAG_WRITE;
- fault = handle_speculative_fault(mm, address,
- flags & ~FAULT_FLAG_ALLOW_RETRY);
+ fault = handle_speculative_fault(mm, address, flags);
if (!(fault & VM_FAULT_RETRY || fault & VM_FAULT_ERROR))
goto done;
diff --git a/arch/x86/mm/fault.c b/arch/x86/mm/fault.c
index 02c0b884ca18..c62a7ea5e27b 100644
--- a/arch/x86/mm/fault.c
+++ b/arch/x86/mm/fault.c
@@ -1366,8 +1366,7 @@ __do_page_fault(struct pt_regs *regs, unsigned long error_code,
flags |= FAULT_FLAG_INSTRUCTION;
if (error_code & PF_USER) {
- fault = handle_speculative_fault(mm, address,
- flags & ~FAULT_FLAG_ALLOW_RETRY);
+ fault = handle_speculative_fault(mm, address, flags);
/*
* We also check against VM_FAULT_ERROR because we have to
diff --git a/mm/memory.c b/mm/memory.c
index 5b158549789b..35a311b0d314 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3945,7 +3945,6 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
{
struct vm_fault vmf = {
.address = address,
- .flags = flags | FAULT_FLAG_SPECULATIVE,
};
pgd_t *pgd;
p4d_t *p4d;
@@ -3954,6 +3953,10 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
int dead, seq, idx, ret = VM_FAULT_RETRY;
struct vm_area_struct *vma;
+ /* Clear flags that may lead to releasing the mmap_sem to retry */
+ flags &= ~(FAULT_FLAG_ALLOW_RETRY|FAULT_FLAG_KILLABLE);
+ flags |= FAULT_FLAG_SPECULATIVE;
+
idx = srcu_read_lock(&vma_srcu);
vma = find_vma_srcu(mm, address);
if (!vma)
@@ -4040,6 +4043,7 @@ int handle_speculative_fault(struct mm_struct *mm, unsigned long address,
vmf.pgoff = linear_page_index(vma, address);
vmf.gfp_mask = __get_fault_gfp_mask(vma);
vmf.sequence = seq;
+ vmf.flags = flags;
local_irq_enable();
--
2.7.4