Message-Id: <1460494375-30070-13-git-send-email-kamal@canonical.com>
Date: Tue, 12 Apr 2016 13:51:57 -0700
From: Kamal Mostafa <kamal@...onical.com>
To: linux-kernel@...r.kernel.org, stable@...r.kernel.org,
kernel-team@...ts.ubuntu.com
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>,
Christoph Lameter <cl@...ux.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>,
Michael Ellerman <mpe@...erman.id.au>,
Kamal Mostafa <kamal@...onical.com>
Subject: [PATCH 4.2.y-ckt 12/70] powerpc/mm: Fixup preempt underflow with huge pages
4.2.8-ckt8 -stable review patch. If anyone has any objections, please let me know.
---8<------------------------------------------------------------
From: Sebastian Siewior <bigeasy@...utronix.de>
commit 08a5bb2921e490939f78f38fd0d02858bb709942 upstream.

hugepd_free() used __get_cpu_var() once. Nothing ensured that the code
accessing the variable did not migrate from one CPU to another, and soon
this was noticed by Tiejun Chen in 94b09d755462 ("powerpc/hugetlb:
Replace __get_cpu_var with get_cpu_var"). So we had it fixed.

Christoph Lameter was doing his __get_cpu_var() replacements and forgot
PowerPC. Then he noticed this and sent his fixed-up batch again, which
got applied as 69111bac42f5 ("powerpc: Replace __get_cpu_var uses").

The careful reader will notice one little detail: get_cpu_var() got
replaced with this_cpu_ptr(). So now we have a put_cpu_var(), which does
a preempt_enable(), but nothing that does a preempt_disable(), so we
underflow the preempt counter.
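
As a minimal sketch (not part of the upstream commit; the per-CPU variable
and the two helper functions below are invented purely for illustration),
this is how the get_cpu_var()/put_cpu_var() pair keeps the preempt counter
balanced, and why substituting this_cpu_ptr() while leaving put_cpu_var()
in place underflows it:

#include <linux/percpu.h>

/* Hypothetical per-CPU variable, for illustration only. */
static DEFINE_PER_CPU(int, example_counter);

static void balanced_example(void)
{
	/* get_cpu_var() does preempt_disable() before handing out the variable... */
	int *p = &get_cpu_var(example_counter);

	(*p)++;

	/* ...and put_cpu_var() does the matching preempt_enable(). */
	put_cpu_var(example_counter);
}

static void broken_example(void)
{
	/* this_cpu_ptr() does not touch the preempt counter at all... */
	int *p = this_cpu_ptr(&example_counter);

	(*p)++;

	/* ...so this unmatched preempt_enable() underflows the counter. */
	put_cpu_var(example_counter);
}

hugepd_free() needs preemption disabled anyway while it touches the per-CPU
freelist (that is what 94b09d755462 originally fixed), so the fix is to
restore get_cpu_var() rather than to drop the put_cpu_var() calls.
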
Cc: Benjamin Herrenschmidt <benh@...nel.crashing.org>
Cc: Christoph Lameter <cl@...ux.com>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Reviewed-by: Aneesh Kumar K.V <aneesh.kumar@...ux.vnet.ibm.com>
Signed-off-by: Michael Ellerman <mpe@...erman.id.au>
Signed-off-by: Kamal Mostafa <kamal@...onical.com>
---
arch/powerpc/mm/hugetlbpage.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index bb0bd70..51d8bfe 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -467,13 +467,13 @@ static void hugepd_free(struct mmu_gather *tlb, void *hugepte)
 {
 	struct hugepd_freelist **batchp;
 
-	batchp = this_cpu_ptr(&hugepd_freelist_cur);
+	batchp = &get_cpu_var(hugepd_freelist_cur);
 
 	if (atomic_read(&tlb->mm->mm_users) < 2 ||
 	    cpumask_equal(mm_cpumask(tlb->mm),
 			  cpumask_of(smp_processor_id()))) {
 		kmem_cache_free(hugepte_cache, hugepte);
-	        put_cpu_var(hugepd_freelist_cur);
+		put_cpu_var(hugepd_freelist_cur);
 		return;
 	}
 
--
2.7.4