Message-Id: <20230119212317.8324-26-rick.p.edgecombe@intel.com>
Date: Thu, 19 Jan 2023 13:23:03 -0800
From: Rick Edgecombe <rick.p.edgecombe@...el.com>
To: x86@...nel.org, "H . Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-api@...r.kernel.org,
Arnd Bergmann <arnd@...db.de>,
Andy Lutomirski <luto@...nel.org>,
Balbir Singh <bsingharora@...il.com>,
Borislav Petkov <bp@...en8.de>,
Cyrill Gorcunov <gorcunov@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Eugene Syromiatnikov <esyr@...hat.com>,
Florian Weimer <fweimer@...hat.com>,
"H . J . Lu" <hjl.tools@...il.com>, Jann Horn <jannh@...gle.com>,
Jonathan Corbet <corbet@....net>,
Kees Cook <keescook@...omium.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadav Amit <nadav.amit@...il.com>,
Oleg Nesterov <oleg@...hat.com>, Pavel Machek <pavel@....cz>,
Peter Zijlstra <peterz@...radead.org>,
Randy Dunlap <rdunlap@...radead.org>,
Weijiang Yang <weijiang.yang@...el.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>,
John Allen <john.allen@....com>, kcc@...gle.com,
eranian@...gle.com, rppt@...nel.org, jamorris@...ux.microsoft.com,
dethoma@...rosoft.com, akpm@...ux-foundation.org,
Andrew.Cooper3@...rix.com, christina.schimpe@...el.com
Cc: rick.p.edgecombe@...el.com
Subject: [PATCH v5 25/39] mm: Warn on shadow stack memory in wrong vma

The x86 Control-flow Enforcement Technology (CET) feature includes a new
type of memory called shadow stack. This shadow stack memory has some
unusual properties, which require some core mm changes to function
properly.

One sharp edge is that PTEs that are both Write=0 and Dirty=1 are
treated as shadow stack by the CPU, but this combination used to be
created by the kernel on x86. Previous patches have changed the kernel
to avoid creating these PTEs unless they are for shadow stack memory. In
case any missed corners of the kernel are still creating PTEs like this
for non-shadow stack memory, and to catch any re-introductions of the
logic, warn if any shadow stack PTEs (Write=0, Dirty=1) are found in
non-shadow stack VMAs when they are being zapped. This won't catch
transient cases but should have decent coverage. The check is compiled
out when shadow stack support is not configured.
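
As an illustrative aside (not taken verbatim from this patch), the
Write=0, Dirty=1 encoding above is what an x86 shadow stack PTE check
boils down to. A minimal sketch, assuming the usual _PAGE_RW and
_PAGE_DIRTY flag helpers, could look like the following (the function
name is made up for illustration):

/*
 * Sketch only: a PTE encodes shadow stack when the Write bit is clear
 * and the Dirty bit is set, gated on the CPU/kernel actually supporting
 * user shadow stacks.
 */
static inline bool pte_is_shadow_stack(pte_t pte)
{
	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
		return false;

	/* Write=0 (_PAGE_RW clear) and Dirty=1 (_PAGE_DIRTY set) */
	return (pte_flags(pte) & (_PAGE_RW | _PAGE_DIRTY)) == _PAGE_DIRTY;
}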

In order to check whether a PTE is shadow stack from core mm code, add
default implementations of pte_shstk() and pmd_shstk() that
architectures can override.
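
The defaults follow the usual #ifndef override pattern: an architecture
that implements the helpers also defines a same-named macro, and every
other architecture silently gets the always-false stub, so the new
warnings cannot fire there. A rough sketch of how another architecture
could plug in (the arch helper name below is hypothetical):

/* In that architecture's <asm/pgtable.h>: */
#define pte_shstk pte_shstk	/* suppress the generic stub */
static inline bool pte_shstk(pte_t pte)
{
	/* hypothetical arch-specific test for the shadow stack encoding */
	return my_arch_pte_is_shadow_stack(pte);
}

Architectures without such a definition compile the new
VM_WARN_ON_ONCE() conditions against the constant-false stub, so the
checks effectively disappear there.
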
Tested-by: Pengfei Xu <pengfei.xu@...el.com>
Tested-by: John Allen <john.allen@....com>
Signed-off-by: Rick Edgecombe <rick.p.edgecombe@...el.com>
---
v5:
 - Fix typo in commit log

v3:
 - New patch

 arch/x86/include/asm/pgtable.h |  2 ++
 include/linux/pgtable.h        | 14 ++++++++++++++
 mm/huge_memory.c               |  2 ++
 mm/memory.c                    |  2 ++
 4 files changed, 20 insertions(+)

diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index 425ded5dd6ec..356f1d43e403 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -129,6 +129,7 @@ static inline bool pte_dirty(pte_t pte)
 	return pte_flags(pte) & _PAGE_DIRTY_BITS;
 }
 
+#define pte_shstk pte_shstk
 static inline bool pte_shstk(pte_t pte)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
@@ -147,6 +148,7 @@ static inline bool pmd_dirty(pmd_t pmd)
 	return pmd_flags(pmd) & _PAGE_DIRTY_BITS;
 }
 
+#define pmd_shstk pmd_shstk
 static inline bool pmd_shstk(pmd_t pmd)
 {
 	if (!cpu_feature_enabled(X86_FEATURE_USER_SHSTK))
diff --git a/include/linux/pgtable.h b/include/linux/pgtable.h
index 49ce1f055242..04d0bc466e43 100644
--- a/include/linux/pgtable.h
+++ b/include/linux/pgtable.h
@@ -539,6 +539,20 @@ static inline pte_t pte_mkwrite_shstk(pte_t pte)
 }
 #endif
 
+#ifndef pte_shstk
+static inline bool pte_shstk(pte_t pte)
+{
+	return false;
+}
+#endif
+
+#ifndef pmd_shstk
+static inline bool pmd_shstk(pmd_t pmd)
+{
+	return false;
+}
+#endif
+
 #ifndef pmd_mkwrite_shstk
 static inline pmd_t pmd_mkwrite_shstk(pmd_t pmd)
 {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index fbb8beb9265e..5bd71da75dec 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1700,6 +1700,8 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 	 */
 	orig_pmd = pmdp_huge_get_and_clear_full(vma, addr, pmd,
 						tlb->fullmm);
+	VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
+			pmd_shstk(orig_pmd));
 	tlb_remove_pmd_tlb_entry(tlb, pmd, addr);
 	if (vma_is_special_huge(vma)) {
 		if (arch_needs_pgtable_deposit())
diff --git a/mm/memory.c b/mm/memory.c
index 5e5107232a26..c4cc38baffc5 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -1381,6 +1381,8 @@ static unsigned long zap_pte_range(struct mmu_gather *tlb,
 			continue;
 		ptent = ptep_get_and_clear_full(mm, addr, pte,
 						tlb->fullmm);
+		VM_WARN_ON_ONCE(!(vma->vm_flags & VM_SHADOW_STACK) &&
+				pte_shstk(ptent));
 		tlb_remove_tlb_entry(tlb, pte, addr);
 		zap_install_uffd_wp_if_needed(vma, addr, pte, details,
 					      ptent);
--
2.17.1