Message-Id: <1360772031-27186-10-git-send-email-bigeasy@linutronix.de>
Date: Wed, 13 Feb 2013 17:13:48 +0100
From: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
To: Steven Rostedt <rostedt@...dmis.org>
Cc: linux-kernel@...r.kernel.org, linux-rt-users@...r.kernel.org,
Carsten Emde <C.Emde@...dl.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: [PATCH 09/12] FIX [2/2] slub: Tid must be retrieved from the percpu area of the current processor

From: Christoph Lameter <cl@...ux.com>

As Steven Rostedt has pointed out: rescheduling could occur on a
different processor after the determination of the per cpu pointer and
before the tid is retrieved. This could result in an allocation from
the wrong node in slab_alloc().

The effect is much more severe in slab_free() where we could free to
the freelist of the wrong page.

The window for something like that occurring is pretty small but it is
possible.
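A minimal sketch of the window (illustration only, not kernel code; the
CPU numbers are hypothetical):

	c = __this_cpu_ptr(s->cpu_slab);  /* c -> per cpu area of CPU 0  */
	/* preemption here may migrate the task from CPU 0 to CPU 1 */
	tid = c->tid;                     /* tid taken from CPU 0's area */
	/* fast path continues on CPU 1 with CPU 0's c and tid */

Disabling preemption across the two reads ensures that the per cpu
pointer and the tid are taken from the same processor.
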
Signed-off-by: Christoph Lameter <cl@...ux.com>
Cc: Steven Rostedt <rostedt@...dmis.org>
Cc: Pekka Enberg <penberg@...nel.org>
Signed-off-by: Thomas Gleixner <tglx@...utronix.de>
Signed-off-by: Sebastian Andrzej Siewior <bigeasy@...utronix.de>
---
 mm/slub.c | 12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index fbf6810..634aabc 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2313,13 +2313,18 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 		return NULL;
 
 redo:
-
 	/*
 	 * Must read kmem_cache cpu data via this cpu ptr. Preemption is
 	 * enabled. We may switch back and forth between cpus while
 	 * reading from one cpu area. That does not matter as long
 	 * as we end up on the original cpu again when doing the cmpxchg.
+	 *
+	 * Preemption is disabled for the retrieval of the tid because that
+	 * must occur from the current processor. We cannot allow rescheduling
+	 * on a different processor between the determination of the pointer
+	 * and the retrieval of the tid.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	/*
@@ -2329,7 +2334,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	 * linked list in between.
 	 */
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	object = c->freelist;
 	if (unlikely(!object || !node_match(c, node)))
@@ -2575,10 +2580,11 @@ static __always_inline void slab_free(struct kmem_cache *s,
 	 * data is retrieved via this pointer. If we are on the same cpu
 	 * during the cmpxchg then the free will succedd.
 	 */
+	preempt_disable();
 	c = __this_cpu_ptr(s->cpu_slab);
 
 	tid = c->tid;
-	barrier();
+	preempt_enable();
 
 	if (likely(page == c->page)) {
 		set_freepointer(s, object, c->freelist);
--
1.7.10.4