Message-Id: <4a7b373b438d5b93999349c648189bd5ba8f9198.1550088114.git.khalid.aziz@oracle.com>
Date: Wed, 13 Feb 2019 17:01:28 -0700
From: Khalid Aziz <khalid.aziz@...cle.com>
To: juergh@...il.com, tycho@...ho.ws, jsteckli@...zon.de,
ak@...ux.intel.com, torvalds@...ux-foundation.org,
liran.alon@...cle.com, keescook@...gle.com,
akpm@...ux-foundation.org, mhocko@...e.com,
catalin.marinas@....com, will.deacon@....com, jmorris@...ei.org,
konrad.wilk@...cle.com
Cc: Juerg Haefliger <juerg.haefliger@...onical.com>,
deepa.srinivasan@...cle.com, chris.hyser@...cle.com,
tyhicks@...onical.com, dwmw@...zon.co.uk,
andrew.cooper3@...rix.com, jcm@...hat.com,
boris.ostrovsky@...cle.com, kanth.ghatraju@...cle.com,
joao.m.martins@...cle.com, jmattson@...gle.com,
pradeep.vincent@...cle.com, john.haxby@...cle.com,
tglx@...utronix.de, kirill.shutemov@...ux.intel.com, hch@....de,
steven.sistare@...cle.com, labbott@...hat.com, luto@...nel.org,
dave.hansen@...el.com, peterz@...radead.org,
kernel-hardening@...ts.openwall.com, linux-mm@...ck.org,
x86@...nel.org, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Tycho Andersen <tycho@...ker.com>,
Khalid Aziz <khalid.aziz@...cle.com>
Subject: [RFC PATCH v8 05/14] arm64/mm: Add support for XPFO
From: Juerg Haefliger <juerg.haefliger@...onical.com>
Enable support for eXclusive Page Frame Ownership (XPFO) for arm64 and
provide a hook for updating a single kernel page table entry (which is
required by the generic XPFO code).
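For context, the generic XPFO code added earlier in this series is expected to drive
set_kpte() and xpfo_flush_kernel_tlb() roughly as sketched below when a page is
unmapped from, or mapped back into, the kernel's linear mapping. This is only an
illustrative sketch; the example_* helpers are made-up names for this description,
not code from the series:

  /* Illustrative only: approximate shape of the generic XPFO callers. */
  static void example_xpfo_unmap_from_kernel(struct page *page)
  {
          void *kaddr = page_address(page);

          /* Remove the kernel's direct mapping of this page... */
          set_kpte(kaddr, page, __pgprot(0));
          /* ...and flush stale kernel TLB entries for it (order-0 page). */
          xpfo_flush_kernel_tlb(page, 0);
  }

  static void example_xpfo_map_to_kernel(struct page *page)
  {
          void *kaddr = page_address(page);

          /* Re-establish the normal kernel mapping for kernel access. */
          set_kpte(kaddr, page, PAGE_KERNEL);
  }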
CC: linux-arm-kernel@...ts.infradead.org
Signed-off-by: Juerg Haefliger <juerg.haefliger@...onical.com>
Signed-off-by: Tycho Andersen <tycho@...ker.com>
Signed-off-by: Khalid Aziz <khalid.aziz@...cle.com>
---
v6:
- use flush_tlb_kernel_range() instead of __flush_tlb_one()
v8:
- Add check for NULL pte in set_kpte()
arch/arm64/Kconfig | 1 +
arch/arm64/mm/Makefile | 2 ++
arch/arm64/mm/xpfo.c | 64 ++++++++++++++++++++++++++++++++++++++++++
3 files changed, 67 insertions(+)
create mode 100644 arch/arm64/mm/xpfo.c
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index ea2ab0330e3a..f0a9c0007d23 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -171,6 +171,7 @@ config ARM64
select SWIOTLB
select SYSCTL_EXCEPTION_TRACE
select THREAD_INFO_IN_TASK
+ select ARCH_SUPPORTS_XPFO
help
ARM 64-bit (AArch64) Linux support.
diff --git a/arch/arm64/mm/Makefile b/arch/arm64/mm/Makefile
index 849c1df3d214..cca3808d9776 100644
--- a/arch/arm64/mm/Makefile
+++ b/arch/arm64/mm/Makefile
@@ -12,3 +12,5 @@ KASAN_SANITIZE_physaddr.o += n
obj-$(CONFIG_KASAN) += kasan_init.o
KASAN_SANITIZE_kasan_init.o := n
+
+obj-$(CONFIG_XPFO) += xpfo.o
diff --git a/arch/arm64/mm/xpfo.c b/arch/arm64/mm/xpfo.c
new file mode 100644
index 000000000000..1f790f7746ad
--- /dev/null
+++ b/arch/arm64/mm/xpfo.c
@@ -0,0 +1,64 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Copyright (C) 2017 Hewlett Packard Enterprise Development, L.P.
+ * Copyright (C) 2016 Brown University. All rights reserved.
+ *
+ * Authors:
+ * Juerg Haefliger <juerg.haefliger@....com>
+ * Vasileios P. Kemerlis <vpk@...brown.edu>
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms of the GNU General Public License version 2 as published by
+ * the Free Software Foundation.
+ */
+
+#include <linux/mm.h>
+#include <linux/module.h>
+
+#include <asm/tlbflush.h>
+
+/*
+ * Lookup the page table entry for a virtual address and return a pointer to
+ * the entry. Based on x86 tree.
+ */
+static pte_t *lookup_address(unsigned long addr)
+{
+ pgd_t *pgd;
+ pud_t *pud;
+ pmd_t *pmd;
+
+ pgd = pgd_offset_k(addr);
+ if (pgd_none(*pgd))
+ return NULL;
+
+ pud = pud_offset(pgd, addr);
+ if (pud_none(*pud))
+ return NULL;
+
+ pmd = pmd_offset(pud, addr);
+ if (pmd_none(*pmd))
+ return NULL;
+
+ return pte_offset_kernel(pmd, addr);
+}
+
+/* Update a single kernel page table entry */
+inline void set_kpte(void *kaddr, struct page *page, pgprot_t prot)
+{
+ pte_t *pte = lookup_address((unsigned long)kaddr);
+
+ if (unlikely(!pte)) {
+ WARN(1, "xpfo: invalid address %p\n", kaddr);
+ return;
+ }
+
+ set_pte(pte, pfn_pte(page_to_pfn(page), prot));
+}
+
+inline void xpfo_flush_kernel_tlb(struct page *page, int order)
+{
+ unsigned long kaddr = (unsigned long)page_address(page);
+ unsigned long size = PAGE_SIZE;
+
+ flush_tlb_kernel_range(kaddr, kaddr + (1 << order) * size);
+}
--
2.17.1