Message-ID: <YSdwH1T7g5B3E9ZH@zn.tnic>
Date: Thu, 26 Aug 2021 12:42:39 +0200
From: Borislav Petkov <bp@...en8.de>
To: Yu-cheng Yu <yu-cheng.yu@...el.com>
Cc: x86@...nel.org, "H. Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, linux-kernel@...r.kernel.org,
linux-doc@...r.kernel.org, linux-mm@...ck.org,
linux-arch@...r.kernel.org, linux-api@...r.kernel.org,
Arnd Bergmann <arnd@...db.de>,
Andy Lutomirski <luto@...nel.org>,
Balbir Singh <bsingharora@...il.com>,
Cyrill Gorcunov <gorcunov@...il.com>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Eugene Syromiatnikov <esyr@...hat.com>,
Florian Weimer <fweimer@...hat.com>,
"H.J. Lu" <hjl.tools@...il.com>, Jann Horn <jannh@...gle.com>,
Jonathan Corbet <corbet@....net>,
Kees Cook <keescook@...omium.org>,
Mike Kravetz <mike.kravetz@...cle.com>,
Nadav Amit <nadav.amit@...il.com>,
Oleg Nesterov <oleg@...hat.com>, Pavel Machek <pavel@....cz>,
Peter Zijlstra <peterz@...radead.org>,
Randy Dunlap <rdunlap@...radead.org>,
"Ravi V. Shankar" <ravi.v.shankar@...el.com>,
Dave Martin <Dave.Martin@....com>,
Weijiang Yang <weijiang.yang@...el.com>,
Pengfei Xu <pengfei.xu@...el.com>,
Haitao Huang <haitao.huang@...el.com>,
Rick P Edgecombe <rick.p.edgecombe@...el.com>,
"Kirill A . Shutemov" <kirill.shutemov@...ux.intel.com>
Subject: Re: [PATCH v29 12/32] x86/mm: Update ptep_set_wrprotect() and
pmdp_set_wrprotect() for transition from _PAGE_DIRTY to _PAGE_COW
On Fri, Aug 20, 2021 at 11:11:41AM -0700, Yu-cheng Yu wrote:
> @@ -1322,6 +1340,24 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
>  static inline void pmdp_set_wrprotect(struct mm_struct *mm,
>  				      unsigned long addr, pmd_t *pmdp)
>  {
> +	/*
> +	 * If Shadow Stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY
> +	 * to _PAGE_COW (see comments at pmd_wrprotect()).
> +	 * When a thread reads a RW=1, Dirty=0 PMD and before changing it
> +	 * to RW=0, Dirty=0, another thread could have written to the page
> +	 * and the PMD is RW=1, Dirty=1 now. Use try_cmpxchg() to detect
> +	 * PMD changes and update old_pmd, then try again.
> +	 */
> +	if (cpu_feature_enabled(X86_FEATURE_SHSTK)) {
> +		pmd_t old_pmd, new_pmd;
> +
> +		old_pmd = READ_ONCE(*pmdp);
> +		do {
> +			new_pmd = pmd_wrprotect(old_pmd);
> +		} while (!try_cmpxchg((pmdval_t *)pmdp, (pmdval_t *)&old_pmd, pmd_val(new_pmd)));
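
The loop above is the usual compare-and-exchange retry pattern: if
another thread marks the PMD Dirty between the READ_ONCE() and the
write, try_cmpxchg() fails, old_pmd is refreshed with the current
value, and the write-protect is redone on top of it, so the freshly
set Dirty bit is not lost. A minimal userspace sketch of the same
idea, with C11 atomics standing in for try_cmpxchg() and an
illustrative RW bit position, would be:

/* Sketch only, not kernel code: uint64_t stands in for pmdval_t. */
#include <stdatomic.h>
#include <stdint.h>

#define RW_BIT	(1ULL << 1)	/* illustrative bit position */

static void wrprotect(_Atomic uint64_t *pmd)
{
	uint64_t old = atomic_load(pmd);
	uint64_t new;

	/*
	 * On a failed exchange, atomic_compare_exchange_weak() reloads
	 * 'old' with the current value (possibly with Dirty now set),
	 * so the next iteration clears RW on top of that value.
	 */
	do {
		new = old & ~RW_BIT;	/* pmd_wrprotect() stand-in */
	} while (!atomic_compare_exchange_weak(pmd, &old, new));
}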
From the previous thread:
> If !(CONFIG_PGTABLE_LEVELS > 2), we don't have pmd_t.pmd.
So I guess you can do this, in line with how the pmd folding is done in
the rest of the mm headers. There's no need to make this more complex
than it is just so that it builds on 32-bit !PAE configurations, where
CET is not even enabled anyway.
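
For reference, with 2-level page tables pmd_t is the folded type from
include/asm-generic/pgtable-nopmd.h and has no .pmd member, which is
why the &pmdp->pmd form below needs the CONFIG_PGTABLE_LEVELS guard.
Roughly, the two definitions are:

	/* 2-level (!PAE) configs: pmd is folded into pud, no 'pmd' member */
	typedef struct { pud_t pud; } pmd_t;

	/* CONFIG_PGTABLE_LEVELS > 2 (x86 pgtable types): */
	typedef struct { pmdval_t pmd; } pmd_t;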
---
diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
index df4ce715560a..7c0542997790 100644
--- a/arch/x86/include/asm/pgtable.h
+++ b/arch/x86/include/asm/pgtable.h
@@ -1340,6 +1340,7 @@ static inline pud_t pudp_huge_get_and_clear(struct mm_struct *mm,
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				      unsigned long addr, pmd_t *pmdp)
 {
+#if CONFIG_PGTABLE_LEVELS > 2
 	/*
 	 * If Shadow Stack is enabled, pmd_wrprotect() moves _PAGE_DIRTY
 	 * to _PAGE_COW (see comments at pmd_wrprotect()).
@@ -1354,10 +1355,11 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 		old_pmd = READ_ONCE(*pmdp);
 		do {
 			new_pmd = pmd_wrprotect(old_pmd);
-		} while (!try_cmpxchg((pmdval_t *)pmdp, (pmdval_t *)&old_pmd, pmd_val(new_pmd)));
+		} while (!try_cmpxchg(&pmdp->pmd, &old_pmd.pmd, new_pmd.pmd));

 		return;
 	}
+#endif
 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
 }
--
Regards/Gruss,
Boris.
https://people.kernel.org/tglx/notes-about-netiquette