Message-Id: <20170619074801.18fa2a16@mschwideX1>
Date: Mon, 19 Jun 2017 07:48:01 +0200
From: Martin Schwidefsky <schwidefsky@...ibm.com>
To: "Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Vineet Gupta <vgupta@...opsys.com>,
Russell King <linux@...linux.org.uk>,
Will Deacon <will.deacon@....com>,
Catalin Marinas <catalin.marinas@....com>,
Ralf Baechle <ralf@...ux-mips.org>,
"David S. Miller" <davem@...emloft.net>,
"Aneesh Kumar K . V" <aneesh.kumar@...ux.vnet.ibm.com>,
Heiko Carstens <heiko.carstens@...ibm.com>,
Andrea Arcangeli <aarcange@...hat.com>,
linux-arch@...r.kernel.org, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...nel.org>,
"H . Peter Anvin" <hpa@...or.com>,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [PATCHv2 1/3] x86/mm: Provide pmdp_establish() helper
On Thu, 15 Jun 2017 17:52:22 +0300
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com> wrote:
> We need an atomic way to set up a pmd page table entry, avoiding races with
> the CPU setting dirty/accessed bits. This is required to implement
> pmdp_invalidate() that doesn't lose these bits.
>
> On PAE we have to use cmpxchg8b as we cannot assume what the value of the new
> pmd is, and setting it up half-by-half can expose a broken, corrupted entry to
> the CPU.
>
> Signed-off-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
> Cc: Ingo Molnar <mingo@...nel.org>
> Cc: H. Peter Anvin <hpa@...or.com>
> Cc: Thomas Gleixner <tglx@...utronix.de>
> ---
> arch/x86/include/asm/pgtable-3level.h | 18 ++++++++++++++++++
> arch/x86/include/asm/pgtable.h | 14 ++++++++++++++
> 2 files changed, 32 insertions(+)
>
> diff --git a/arch/x86/include/asm/pgtable.h b/arch/x86/include/asm/pgtable.h
> index f5af95a0c6b8..a924fc6a96b9 100644
> --- a/arch/x86/include/asm/pgtable.h
> +++ b/arch/x86/include/asm/pgtable.h
> @@ -1092,6 +1092,20 @@ static inline void pmdp_set_wrprotect(struct mm_struct *mm,
> 	clear_bit(_PAGE_BIT_RW, (unsigned long *)pmdp);
> }
>
> +#ifndef pmdp_establish
> +#define pmdp_establish pmdp_establish
> +static inline pmd_t pmdp_establish(pmd_t *pmdp, pmd_t pmd)
> +{
> +	if (IS_ENABLED(CONFIG_SMP)) {
> +		return xchg(pmdp, pmd);
> +	} else {
> +		pmd_t old = *pmdp;
> +		*pmdp = pmd;
> +		return old;
> +	}
> +}
> +#endif
> +
> /*
> * clone_pgd_range(pgd_t *dst, pgd_t *src, int count);
> *
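The pgtable-3level.h hunk with the PAE variant is not quoted above. Purely as an
illustration of the cmpxchg8b approach the changelog describes (this is a sketch,
not the actual patch content), the PAE helper could look roughly like this:

static inline pmd_t pmdp_establish(pmd_t *pmdp, pmd_t pmd)
{
	pmd_t old;

	/*
	 * Replace the 64-bit entry in one shot; if a concurrent hardware
	 * walker sets the dirty/accessed bits in the meantime, cmpxchg64
	 * fails and we retry with the fresh value.
	 */
	do {
		old = *pmdp;
	} while (cmpxchg64(&pmdp->pmd, pmd_val(old), pmd_val(pmd)) != pmd_val(old));

	return old;
}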
For the s390 version of the pmdp_establish function we need the mm to be able
to do the TLB flush correctly. Can we please add a "struct vm_area_struct *vma"
argument to pmdp_establish, analogous to pmdp_invalidate? (A sketch of the
corresponding x86-side change follows after the patch below.)
The s390 patch would then look like this:
--
From 4d4641249d5e826c21c522d149553e89d73fcd4f Mon Sep 17 00:00:00 2001
From: Martin Schwidefsky <schwidefsky@...ibm.com>
Date: Mon, 19 Jun 2017 07:40:11 +0200
Subject: [PATCH] s390/mm: add pmdp_establish
Define the pmdp_establish function to replace a pmd entry with a new
one and return the old value.
Signed-off-by: Martin Schwidefsky <schwidefsky@...ibm.com>
---
arch/s390/include/asm/pgtable.h | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/arch/s390/include/asm/pgtable.h b/arch/s390/include/asm/pgtable.h
index bb59a0aa3249..dedeecd5455c 100644
--- a/arch/s390/include/asm/pgtable.h
+++ b/arch/s390/include/asm/pgtable.h
@@ -1511,6 +1511,13 @@ static inline void pmdp_invalidate(struct vm_area_struct *vma,
 	pmdp_xchg_direct(vma->vm_mm, addr, pmdp, __pmd(_SEGMENT_ENTRY_EMPTY));
 }
 
+static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
+		unsigned long addr, pmd_t *pmdp, pmd_t pmd)
+{
+	return pmdp_xchg_direct(vma->vm_mm, addr, pmdp, pmd);
+}
+#define pmdp_establish pmdp_establish
+
 #define __HAVE_ARCH_PMDP_SET_WRPROTECT
 static inline void pmdp_set_wrprotect(struct mm_struct *mm,
 				     unsigned long addr, pmd_t *pmdp)
--
2.11.2
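
For illustration only (not part of either patch), the x86 fallback quoted at the
top would then grow the same arguments. Assuming an address parameter is wanted
as well, since the s390 implementation above passes one to pmdp_xchg_direct, it
might read:

static inline pmd_t pmdp_establish(struct vm_area_struct *vma,
		unsigned long addr, pmd_t *pmdp, pmd_t pmd)
{
	if (IS_ENABLED(CONFIG_SMP)) {
		/* On SMP the exchange must be atomic versus hardware walkers. */
		return xchg(pmdp, pmd);
	} else {
		/* On UP nothing can race with us, a plain store is enough. */
		pmd_t old = *pmdp;
		*pmdp = pmd;
		return old;
	}
}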
--
blue skies,
Martin.
"Reality continues to ruin my life." - Calvin.