Message-ID: <alpine.DEB.2.00.1111221033350.28197@router.home>
Date: Tue, 22 Nov 2011 10:36:44 -0600 (CST)
From: Christoph Lameter <cl@...ux.com>
To: Eric Dumazet <eric.dumazet@...il.com>
cc: Markus Trippelsdorf <markus@...ppelsdorf.de>,
Christian Kujau <lists@...dbynature.de>,
Benjamin Herrenschmidt <benh@...nel.crashing.org>,
"Alex,Shi" <alex.shi@...el.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Pekka Enberg <penberg@...nel.org>,
Matt Mackall <mpm@...enic.com>,
"netdev@...r.kernel.org" <netdev@...r.kernel.org>,
Tejun Heo <tj@...nel.org>
Subject: Re: WARNING: at mm/slub.c:3357, kernel BUG at mm/slub.c:3413
On Tue, 22 Nov 2011, Eric Dumazet wrote:
> On Tuesday, 22 November 2011 at 10:20 -0600, Christoph Lameter wrote:
> > Argh. The redzoning (and the general object pad initialization) is outside
> > of the slab_lock now, so I get false positives on those. That
> > is already in 3.1 as far as I know. To solve that we would have to cover a
> > much wider area of the alloc and free paths with the slab lock.
> >
> > But I do not get the count mismatches that you saw. Maybe related to
> > preemption. Will try that next.
>
> Also I note that the checks (redzoning and all the other features) that should
> be done in kfree() are only done on the slow path?
Yes, debugging forces the slow paths.
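(Purely to illustrate that point, and not the actual mm/slub.c path selection
logic: the checks live in free_debug_processing(), which is only called from
__slab_free(), so with debugging on every free conceptually does

	/* simplified sketch, not real SLUB code */
	if (kmem_cache_debug(s)) {
		__slab_free(s, page, x, addr);	/* checks run only here */
		return;
	}
	/* otherwise: lockless per-cpu fast path, no debug checks */

)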
> I am considering adding a "quarantine" capability: each cpu will
> maintain in its struct kmem_cache_cpu a FIFO list of "s->quarantine_max"
> freed objects.
>
> So it should be easier to track use-after-free bugs by setting
> quarantine_max to a large value.
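(A rough sketch of what such a per-cpu FIFO could look like -- the structure
and helper below, quarantine_put() and its fields, are invented names for
illustration only, not existing SLUB code:

	/* illustration only: a fixed-size ring of recently freed objects */
	struct quarantine {
		void **fifo;		/* s->quarantine_max slots */
		unsigned int head;	/* next slot to fill */
		unsigned int nr;	/* objects currently parked */
		unsigned int max;	/* s->quarantine_max */
	};

	/*
	 * Park a freed object instead of releasing it immediately and
	 * hand back the oldest entry once the ring is full, so the
	 * poisoned contents of a stale object stay in place for up to
	 * quarantine_max frees and a use-after-free can still be caught.
	 */
	static void *quarantine_put(struct quarantine *q, void *x)
	{
		void *evict = NULL;

		if (q->nr == q->max)
			evict = q->fifo[q->head];	/* oldest, really free it */
		else
			q->nr++;
		q->fifo[q->head] = x;
		q->head = (q->head + 1) % q->max;
		return evict;				/* NULL until the ring fills */
	}

The caller would then really free whatever quarantine_put() returns.)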
It may be easier to simply disable interrupts early in __slab_free
if debugging is on. Doesn't look nice right now. Draft patch (not tested
yet):
---
mm/slub.c | 15 ++++++++++++---
1 file changed, 12 insertions(+), 3 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-11-22 09:04:47.000000000 -0600
+++ linux-2.6/mm/slub.c 2011-11-22 10:33:12.000000000 -0600
@@ -2391,8 +2391,13 @@ static void __slab_free(struct kmem_cach
stat(s, FREE_SLOWPATH);
- if (kmem_cache_debug(s) && !free_debug_processing(s, page, x, addr))
- return;
+ if (kmem_cache_debug(s)) {
+ local_irq_save(flags);
+ if (!free_debug_processing(s, page, x, addr)) {
+ local_irq_restore(flags);
+ return;
+ }
+ }
do {
prior = page->freelist;
@@ -2422,8 +2427,10 @@ static void __slab_free(struct kmem_cach
* Otherwise the list_lock will synchronize with
* other processors updating the list of slabs.
*/
- spin_lock_irqsave(&n->list_lock, flags);
+ if (!kmem_cache_debug(s))
+ local_irq_save(flags);
+ spin_lock(&n->list_lock);
}
}
inuse = new.inuse;
@@ -2448,6 +2455,8 @@ static void __slab_free(struct kmem_cach
*/
if (was_frozen)
stat(s, FREE_FROZEN);
+ if (kmem_cache_debug(s))
+ local_irq_restore(flags);
return;
}