Message-Id: <20190624092325.659180675@linuxfoundation.org>
Date: Mon, 24 Jun 2019 17:57:08 +0800
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, ShihPo Hung <shihpo.hung@...ive.com>,
Paul Walmsley <paul.walmsley@...ive.com>,
Palmer Dabbelt <palmer@...ive.com>,
Albert Ou <aou@...s.berkeley.edu>,
linux-riscv@...ts.infradead.org
Subject: [PATCH 5.1 096/121] riscv: mm: synchronize MMU after pte change
From: ShihPo Hung <shihpo.hung@...ive.com>
commit bf587caae305ae3b4393077fb22c98478ee55755 upstream.
Because RISC-V-compliant implementations can cache invalid entries
in the TLB, an SFENCE.VMA is necessary after changes to the page
table.  This patch adds an SFENCE.VMA to the vmalloc_fault path.
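For reference, local_flush_tlb_page() on riscv is, at this point in the
tree, a thin inline wrapper around the address-scoped form of SFENCE.VMA.
A minimal sketch of the helper, assuming the 5.1-era
arch/riscv/include/asm/tlbflush.h (shown for context; not part of this
diff):

/*
 * Flush one page from the local hart's TLB (sketch of the
 * arch/riscv/include/asm/tlbflush.h helper).  SFENCE.VMA with an
 * address operand orders preceding page-table writes and discards
 * any cached translation for that virtual address.
 */
static inline void local_flush_tlb_page(unsigned long addr)
{
	__asm__ __volatile__ ("sfence.vma %0" : : "r" (addr) : "memory");
}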
Signed-off-by: ShihPo Hung <shihpo.hung@...ive.com>
[paul.walmsley@...ive.com: reversed tab->whitespace conversion,
wrapped comment lines]
Signed-off-by: Paul Walmsley <paul.walmsley@...ive.com>
Cc: Palmer Dabbelt <palmer@...ive.com>
Cc: Albert Ou <aou@...s.berkeley.edu>
Cc: Paul Walmsley <paul.walmsley@...ive.com>
Cc: linux-riscv@...ts.infradead.org
Cc: stable@...r.kernel.org
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/riscv/mm/fault.c | 13 +++++++++++++
1 file changed, 13 insertions(+)
--- a/arch/riscv/mm/fault.c
+++ b/arch/riscv/mm/fault.c
@@ -29,6 +29,7 @@
 
 #include <asm/pgalloc.h>
 #include <asm/ptrace.h>
+#include <asm/tlbflush.h>
 
 /*
  * This routine handles page faults. It determines the address and the
@@ -281,6 +282,18 @@ vmalloc_fault:
 		pte_k = pte_offset_kernel(pmd_k, addr);
 		if (!pte_present(*pte_k))
 			goto no_context;
+
+		/*
+		 * The kernel assumes that TLBs don't cache invalid
+		 * entries, but in RISC-V, SFENCE.VMA specifies an
+		 * ordering constraint, not a cache flush; it is
+		 * necessary even after writing invalid entries.
+		 * Relying on flush_tlb_fix_spurious_fault would
+		 * suffice, but the extra traps reduce
+		 * performance. So, eagerly SFENCE.VMA.
+		 */
+		local_flush_tlb_page(addr);
+
 		return;
 	}
 }
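The flush_tlb_fix_spurious_fault() alternative mentioned in the new
comment refers to the generic fallback, which resolves a spurious fault
lazily: the CPU traps again on the stale translation, and only then is
the single page flushed.  A sketch of that generic definition, assuming
the include/asm-generic/pgtable.h of this era (not part of this diff):

/*
 * Generic fallback: if an architecture does not override it, a
 * spurious fault is fixed up by flushing the one offending page
 * after the fact, i.e. at the cost of at least one extra trap.
 */
#ifndef flush_tlb_fix_spurious_fault
#define flush_tlb_fix_spurious_fault(vma, address) flush_tlb_page(vma, address)
#endif

Hence the patch flushes eagerly at pte-update time instead, trading one
SFENCE.VMA up front for repeated traps on the same address.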