Message-Id: <20160418022508.099008358@linuxfoundation.org>
Date: Mon, 18 Apr 2016 11:27:53 +0900
From: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
To: linux-kernel@...r.kernel.org
Cc: Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
stable@...r.kernel.org,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Christoph Lameter <cl@...ux.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
"Aneesh Kumar K.V" <aneesh.kumar@...ux.vnet.ibm.com>,
Michael Ellerman <mpe@...erman.id.au>
Subject: [PATCH 4.4 011/137] powerpc/mm: Fixup preempt underflow with huge pages
4.4-stable review patch. If anyone has any objections, please let me know.
------------------
From: Sebastian Siewior <bigeasy@...utronix.de>
commit 08a5bb2921e490939f78f38fd0d02858bb709942 upstream.
hugepd_free() used __get_cpu_var() once. Nothing ensured that the code
accessing the variable did not migrate from one CPU to another, and soon
this was noticed by Tiejun Chen in 94b09d755462 ("powerpc/hugetlb:
Replace __get_cpu_var with get_cpu_var"). So we had it fixed.

Christoph Lameter was doing his __get_cpu_var() replacements and forgot
PowerPC. Then he noticed this and sent his fixed-up batch again, which
got applied as 69111bac42f5 ("powerpc: Replace __get_cpu_var uses").

The careful reader will notice one little detail: get_cpu_var() got
replaced with this_cpu_ptr(). So now we have a put_cpu_var(), which does
a preempt_enable(), and nothing that does preempt_disable(), so we
underflow the preempt counter.
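
As a rough illustration only (not part of the patch; the struct name and
the two helper functions below are made up), the difference between the
two access patterns looks like this. get_cpu_var() disables preemption
and must be paired with put_cpu_var(); this_cpu_ptr() does not touch the
preempt counter at all, so a leftover put_cpu_var() underflows it:

#include <linux/percpu.h>
#include <linux/preempt.h>

/* hypothetical per-CPU batch, standing in for hugepd_freelist_cur */
struct example_freelist {
	void *ptes[16];
	int index;
};
static DEFINE_PER_CPU(struct example_freelist *, example_freelist_cur);

static void buggy_pattern(void)
{
	struct example_freelist **batchp;

	/* no preempt_disable() happens here */
	batchp = this_cpu_ptr(&example_freelist_cur);
	/* ... use *batchp ... */
	put_cpu_var(example_freelist_cur);	/* preempt_enable() -> underflow */
}

static void fixed_pattern(void)
{
	struct example_freelist **batchp;

	/* get_cpu_var() does preempt_disable() and returns the variable */
	batchp = &get_cpu_var(example_freelist_cur);
	/* ... use *batchp ... */
	put_cpu_var(example_freelist_cur);	/* balanced preempt_enable() */
}
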
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Christoph Lameter <cl@...ux.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
Signed-off-by: Greg Kroah-Hartman <gregkh@...uxfoundation.org>
---
arch/powerpc/mm/hugetlbpage.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -486,13 +486,13 @@ static void hugepd_free(struct mmu_gathe
 {
 	struct hugepd_freelist **batchp;
 
-	batchp = this_cpu_ptr(&hugepd_freelist_cur);
+	batchp = &get_cpu_var(hugepd_freelist_cur);
 
 	if (atomic_read(&tlb->mm->mm_users) < 2 ||
 	    cpumask_equal(mm_cpumask(tlb->mm),
 			  cpumask_of(smp_processor_id()))) {
 		kmem_cache_free(hugepte_cache, hugepte);
-		put_cpu_var(hugepd_freelist_cur);
+		put_cpu_var(hugepd_freelist_cur);
 		return;
 	}