Date:	Thu, 17 Jan 2013 19:28:44 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Christoph Lameter <cl@...ux.com>
Cc:	LKML <linux-kernel@...r.kernel.org>, linux-mm <linux-mm@...ck.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Pekka Enberg <penberg@...nel.org>,
	Matt Mackall <mpm@...enic.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	RT <linux-rt-users@...r.kernel.org>,
	Clark Williams <clark@...hat.com>,
	John Kacur <jkacur@...il.com>,
	"Luis Claudio R. Goncalves" <lgoncalv@...hat.com>
Subject: [RFC][PATCH v2] slub: Keep page and object in sync in slab_alloc_node()

In slab_alloc_node(), after the cpu_slab pointer is assigned, if the task
is preempted and moves to another CPU, there's nothing keeping the page
and object reads in sync. The -rt kernel crashed because page was NULL
while object was not, and node_match() dereferences page. Even though the
crash happened on -rt, nothing prevents the same thing from happening on
mainline.
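
For illustration, here is one interleaving that can produce the crash (a
simplified sketch; the CPU numbers and the second task are hypothetical,
but the read order matches the pre-patch code below):

    task A (on CPU 0)                     task B (on CPU 0)
    -----------------                     -----------------
    c = __this_cpu_ptr(s->cpu_slab);
    tid = c->tid;
    object = c->freelist;   /* non-NULL */
    <preempted; migrates to CPU 1, but
     c still points at CPU 0's cpu_slab>
                                          alloc/free activity on CPU 0
                                          sets c->page to NULL
    page = c->page;         /* NULL */
    node_match(page, node)  /* dereferences the NULL page */

Since object is non-NULL, the !object check does not short-circuit, and
node_match() gets called with a NULL page.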

The easiest fix is to disable interrupts for the entire window from
acquiring the current CPU's cpu_slab to reading its object and page.
After that, it's fine to allow preemption again: if the task has migrated
or the freelist has changed in the meantime, the tid check in the cmpxchg
will fail and the allocation is redone.

Signed-off-by: Steven Rostedt <rostedt@...dmis.org>

diff --git a/mm/slub.c b/mm/slub.c
index ba2ca53..f0681db 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2325,6 +2325,7 @@ static __always_inline void *slab_alloc_node(struct kmem_cache *s,
 	struct kmem_cache_cpu *c;
 	struct page *page;
 	unsigned long tid;
+	unsigned long flags;
 
 	if (slab_pre_alloc_hook(s, gfpflags))
 		return NULL;
@@ -2337,7 +2338,10 @@ redo:
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * But we need to sync the setting of page and object.
 	 */
+	local_irq_save(flags);
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	/*
@@ -2347,10 +2351,11 @@ redo:
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
 
 	object = c->freelist;
 	page = c->page;
+	local_irq_restore(flags);
+
 	if (unlikely(!object || !node_match(page, node)))
 		object = __slab_alloc(s, gfpflags, node, addr, c);
 


