Message-ID: <154362450646.2367148.16448130381211111341.stgit@dwillia2-desk3.amr.corp.intel.com>
Date: Fri, 30 Nov 2018 16:35:06 -0800
From: Dan Williams <dan.j.williams@...el.com>
To: tglx@...utronix.de
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Peter Zijlstra <peterz@...radead.org>,
Borislav Petkov <bp@...en8.de>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Dave Hansen <dave.hansen@...el.com>, stable@...r.kernel.org,
x86@...nel.org, linux-kernel@...r.kernel.org
Subject: [PATCH v2 0/5] x86/mm: Drop usage of __flush_tlb_all() in
kernel_physical_mapping_init()
Changes since v1 [1]:
* Introduce _safe versions of the set_pte family of helpers (Dave)
* Fix the validation check to accept rewriting the same entry (Peter)
* Fix up the email reference links in the changelogs (Peter)
[1]: https://lkml.org/lkml/2018/11/20/300
---
From patch 5:
Commit f77084d96355 ("x86/mm/pat: Disable preemption around
__flush_tlb_all()") addressed a case where __flush_tlb_all() is called
without preemption being disabled. It also left a warning to catch other
cases where preemption is not disabled. That warning triggers for the
memory hotplug path, which is also used for persistent memory enabling:
WARNING: CPU: 35 PID: 911 at ./arch/x86/include/asm/tlbflush.h:460
RIP: 0010:__flush_tlb_all+0x1b/0x3a
[..]
Call Trace:
phys_pud_init+0x29c/0x2bb
kernel_physical_mapping_init+0xfc/0x219
init_memory_mapping+0x1a5/0x3b0
arch_add_memory+0x2c/0x50
devm_memremap_pages+0x3aa/0x610
pmem_attach_disk+0x585/0x700 [nd_pmem]
Andy wondered why a path that can sleep was using __flush_tlb_all() [1],
and Dave confirmed that a TLB flush is expected when modifying /
invalidating existing pte entries, but not when populating entries for
the first time [2]. Drop the usage of __flush_tlb_all() in
phys_{p4d,pud,pmd}_init() on the expectation that this path only ever
populates empty entries for the linear map. Note that at linear-map
teardown time there is a call to the all-cpu flush_tlb_all() to
invalidate the removed mappings.
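For reference, the population step in question follows this rough
pattern (a simplified sketch of the phys_{p4d,pud,pmd}_init() loops in
arch/x86/mm/init_64.c, not the verbatim kernel code):

    /* Populate a previously-empty upper-level entry... */
    spin_lock(&init_mm.page_table_lock);
    pmd_populate_kernel(&init_mm, pmd, pte);
    spin_unlock(&init_mm.page_table_lock);
    /* ...this flush is what patch 5 drops: the entry was empty,
     * so there is nothing stale in the TLB to invalidate */
    __flush_tlb_all();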
Additionally, Dave wanted some runtime assurance that
kernel_physical_mapping_init() is only populating, and not changing,
existing page table entries. Patches 1 - 4 implement a new _safe family
of set_pte() helpers for that purpose.
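As a rough illustration of the idea (a sketch of how such a helper can
be defined, not necessarily the exact code in the patches), a _safe
setter warns when a present entry is being rewritten with a different
value and otherwise defers to the regular setter:

    /*
     * Sketch: complain if a present entry is being changed to a
     * different value, then install the new entry as usual.
     */
    #define set_pte_safe(ptep, pte) \
    ({ \
        WARN_ON_ONCE(pte_present(*ptep) && !pte_same(*ptep, pte)); \
        set_pte(ptep, pte); \
    })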
Patch 5 is tagged for -stable because the false-positive warning is now
showing up on 4.19-stable kernels. Patches 1 - 4 are not tagged for
-stable, but if the sanity checking is needed there as well, please mark
them as such.
The hang that was observed while developing the sanity checks was
resolved by Peter's suggestion to not trigger the warning when the same
pte value is being rewritten.
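That same-value allowance relies on pXX_same() comparisons at each page
table level. For the upper levels these can be defined trivially in
terms of the entry values, along the lines of this sketch (assuming the
usual pXX_val() accessors):

    /* Sketch of the upper-level comparison helpers (patches 1 and 2). */
    static inline int p4d_same(p4d_t a, p4d_t b)
    {
        return p4d_val(a) == p4d_val(b);
    }

    static inline int pgd_same(pgd_t a, pgd_t b)
    {
        return pgd_val(a) == pgd_val(b);
    }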
---
Dan Williams (5):
generic/pgtable: Make {pmd,pud}_same() unconditionally available
generic/pgtable: Introduce {p4d,pgd}_same()
generic/pgtable: Introduce set_pte_safe()
x86/mm: Validate kernel_physical_mapping_init() pte population
x86/mm: Drop usage of __flush_tlb_all() in kernel_physical_mapping_init()
arch/x86/include/asm/pgalloc.h | 27 +++++++++++++++
arch/x86/mm/init_64.c | 30 +++++++----------
include/asm-generic/5level-fixup.h | 1 +
include/asm-generic/pgtable-nop4d-hack.h | 1 +
include/asm-generic/pgtable-nop4d.h | 1 +
include/asm-generic/pgtable-nopud.h | 1 +
include/asm-generic/pgtable.h | 53 +++++++++++++++++++++++++-----
7 files changed, 87 insertions(+), 27 deletions(-)