Message-Id: <20190107105336.046274395@linuxfoundation.org>
Date: Mon, 7 Jan 2019 13:32:45 +0100
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org, Dave Hansen <dave.hansen@...ux.intel.com>,
Andy Lutomirski <luto@...nel.org>,
Dan Williams <dan.j.williams@...el.com>,
"Peter Zijlstra (Intel)" <peterz@...radead.org>,
"Kirill A. Shutemov" <kirill.shutemov@...ux.intel.com>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Rik van Riel <riel@...riel.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Thomas Gleixner <tglx@...utronix.de>, dave.hansen@...el.com,
Ingo Molnar <mingo@...nel.org>
Subject: [PATCH 4.14 057/101] x86/mm: Drop usage of __flush_tlb_all() in kernel_physical_mapping_init()
4.14-stable review patch. If anyone has any objections, please let me know.
------------------
From: Dan Williams <dan.j.williams@...el.com>
commit ba6f508d0ec4adb09f0a939af6d5e19cdfa8667d upstream.
Commit:
f77084d96355 "x86/mm/pat: Disable preemption around __flush_tlb_all()"
addressed a case where __flush_tlb_all() is called without preemption
being disabled. It also left a warning to catch other cases where
preemption is not disabled.
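
The guard added by that commit is essentially a "warn if this context
could be preempted" check. A minimal standalone sketch of the pattern
follows; preempt_count and WARN_ON_ONCE() here are simplified stand-ins
for the kernel's per-CPU counter and warning machinery, and the real
check lives in arch/x86/include/asm/tlbflush.h:

#include <stdio.h>

static int preempt_count;	/* stand-in; per-CPU in the kernel */

#define WARN_ON_ONCE(cond)					\
	do {							\
		static int warned;				\
		if ((cond) && !warned) {			\
			warned = 1;				\
			fprintf(stderr, "WARNING: %s\n", #cond);\
		}						\
	} while (0)

static void flush_tlb_all_sketch(void)
{
	/*
	 * On x86 a full flush toggles CR4.PGE (or rewrites CR3); if the
	 * task migrates to another CPU mid-sequence, the flush lands on
	 * the wrong CPU, so callers must have preemption disabled.
	 */
	WARN_ON_ONCE(preempt_count == 0);
	/* ... CR4/CR3 manipulation elided ... */
}

int main(void)
{
	flush_tlb_all_sketch();		/* preemptible: warns, as above */
	preempt_count++;		/* models preempt_disable() */
	flush_tlb_all_sketch();		/* quiet */
	preempt_count--;		/* models preempt_enable() */
	return 0;
}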
That warning triggers for the memory hotplug path which is also used for
persistent memory enabling:
WARNING: CPU: 35 PID: 911 at ./arch/x86/include/asm/tlbflush.h:460
RIP: 0010:__flush_tlb_all+0x1b/0x3a
[..]
Call Trace:
phys_pud_init+0x29c/0x2bb
kernel_physical_mapping_init+0xfc/0x219
init_memory_mapping+0x1a5/0x3b0
arch_add_memory+0x2c/0x50
devm_memremap_pages+0x3aa/0x610
pmem_attach_disk+0x585/0x700 [nd_pmem]
Andy wondered why a path that can sleep was using __flush_tlb_all() [1],
and Dave confirmed that TLB flushes are expected when modifying or
invalidating existing PTE entries, but not for initial population [2]. Drop
the usage of __flush_tlb_all() in phys_{p4d,pud,pmd}_init() on the
expectation that this path is only ever populating empty entries for the
linear map. Note, at linear map teardown time there is a call to the
all-cpu flush_tlb_all() to invalidate the removed mappings.
[1]: https://lkml.kernel.org/r/9DFD717D-857D-493D-A606-B635D72BAC21@amacapital.net
[2]: https://lkml.kernel.org/r/749919a4-cdb1-48a3-adb4-adb81a5fa0b5@intel.com
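
The rule being relied on can be illustrated with a small standalone
sketch (hypothetical types and helpers, not the kernel's pte_t or
set_pte()): the TLB only caches translations for entries that were
present when the hardware walked them, so a not-present -> present
transition needs no flush, while touching a live entry does:

#include <stdbool.h>
#include <stdio.h>

typedef unsigned long pte_t;	/* stand-in for the kernel's pte_t */
#define PTE_PRESENT 0x1UL

static void flush_tlb_sketch(void)
{
	puts("flush TLB");	/* the real flush_tlb_all() IPIs every CPU */
}

static void set_pte_sketch(pte_t *ptep, pte_t new_pte)
{
	bool was_present = *ptep & PTE_PRESENT;

	*ptep = new_pte;

	/*
	 * The TLB can only hold a translation that was present when the
	 * hardware walked it.  Populating a previously empty entry, as
	 * phys_{p4d,pud,pmd}_init() does for the linear map, therefore
	 * needs no flush; modifying or removing a live entry does, which
	 * is why teardown still calls the all-cpu flush_tlb_all().
	 */
	if (was_present)
		flush_tlb_sketch();
}

int main(void)
{
	pte_t pte = 0;

	set_pte_sketch(&pte, PTE_PRESENT | 0x1000);	/* populate: silent */
	set_pte_sketch(&pte, 0);			/* teardown: flushes */
	return 0;
}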
[ mingo: Minor readability edits. ]
Suggested-by: Dave Hansen <dave.hansen@...ux.intel.com>
Reported-by: Andy Lutomirski <luto@...nel.org>
Signed-off-by: Dan Williams <dan.j.williams@...el.com>
Acked-by: Peter Zijlstra (Intel) <peterz@...radead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@...ux.intel.com>
Cc: <stable@...r.kernel.org>
Cc: Borislav Petkov <bp@...en8.de>
Cc: H. Peter Anvin <hpa@...or.com>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: Peter Zijlstra <peterz@...radead.org>
Cc: Rik van Riel <riel@...riel.com>
Cc: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Cc: Thomas Gleixner <tglx@...utronix.de>
Cc: dave.hansen@...el.com
Fixes: f77084d96355 ("x86/mm/pat: Disable preemption around __flush_tlb_all()")
Link: http://lkml.kernel.org/r/154395944713.32119.15611079023837132638.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Ingo Molnar <mingo@...nel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/x86/mm/init_64.c | 6 ------
1 file changed, 6 deletions(-)
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -574,7 +574,6 @@ phys_pud_init(pud_t *pud_page, unsigned
paddr_end,
page_size_mask,
prot);
- __flush_tlb_all();
continue;
}
/*
@@ -617,7 +616,6 @@ phys_pud_init(pud_t *pud_page, unsigned
pud_populate(&init_mm, pud, pmd);
spin_unlock(&init_mm.page_table_lock);
}
- __flush_tlb_all();
update_page_count(PG_LEVEL_1G, pages);
@@ -658,7 +656,6 @@ phys_p4d_init(p4d_t *p4d_page, unsigned
paddr_last = phys_pud_init(pud, paddr,
paddr_end,
page_size_mask);
- __flush_tlb_all();
continue;
}
@@ -670,7 +667,6 @@ phys_p4d_init(p4d_t *p4d_page, unsigned
p4d_populate(&init_mm, p4d, pud);
spin_unlock(&init_mm.page_table_lock);
}
- __flush_tlb_all();
return paddr_last;
}
@@ -723,8 +719,6 @@ kernel_physical_mapping_init(unsigned lo
if (pgd_changed)
sync_global_pgds(vaddr_start, vaddr_end - 1);
- __flush_tlb_all();
-
return paddr_last;
}