Message-ID: <alpine.DEB.2.00.1105311155050.19928@router.home>
Date: Tue, 31 May 2011 11:55:58 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: David Rientjes <rientjes@...gle.com>
cc: Pekka Enberg <penberg@...helsinki.fi>,
Eric Dumazet <eric.dumazet@...il.com>,
"H. Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [slubllv6 01/17] slub: Push irq disable into allocate_slab()
On Thu, 26 May 2011, David Rientjes wrote:
> > + if (flags & __GFP_WAIT)
> > + local_irq_disable();
> > +
> > + if (!page)
> > + return NULL;
> > +
>
> This changes the meaning of ORDER_FALLBACK from its previous meaning,
> which was "number of times the preferred order could not be allocated and
> then minimum order could be allocated" to "number of times the preferred
> order could not be allocated, regardless of whether the minimum order
> allocation was successful." The former is the true meaning of the word
> "fallback," so is this semantics change avoidable? Otherwise it seems
> like the statistic should be renamed (NEW_SLAB_FAIL?)
OK, that needs to be fixed:
Subject: slub: Push irq disable into allocate_slab()
Do the irq handling in allocate_slab() instead of __slab_alloc().
__slab_alloc() is already cluttered and allocate_slab() is already
fiddling around with gfp flags.
v6->v7:
Only increment ORDER_FALLBACK if we get a page during fallback
Signed-off-by: Christoph Lameter <cl@...ux.com>
---
mm/slub.c | 23 +++++++++++++----------
1 file changed, 13 insertions(+), 10 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c 2011-05-26 16:13:58.085604969 -0500
+++ linux-2.6/mm/slub.c 2011-05-31 09:42:08.102989621 -0500
@@ -1187,6 +1187,11 @@ static struct page *allocate_slab(struct
struct kmem_cache_order_objects oo = s->oo;
gfp_t alloc_gfp;
+ flags &= gfp_allowed_mask;
+
+ if (flags & __GFP_WAIT)
+ local_irq_enable();
+
flags |= s->allocflags;
/*
@@ -1203,12 +1208,17 @@ static struct page *allocate_slab(struct
* Try a lower order alloc if possible
*/
page = alloc_slab_page(flags, node, oo);
- if (!page)
- return NULL;
- stat(s, ORDER_FALLBACK);
+ if (page)
+ stat(s, ORDER_FALLBACK);
}
+ if (flags & __GFP_WAIT)
+ local_irq_disable();
+
+ if (!page)
+ return NULL;
+
if (kmemcheck_enabled
&& !(s->flags & (SLAB_NOTRACK | DEBUG_DEFAULT_FLAGS))) {
int pages = 1 << oo_order(oo);
@@ -1849,15 +1859,8 @@ new_slab:
goto load_freelist;
}
- gfpflags &= gfp_allowed_mask;
- if (gfpflags & __GFP_WAIT)
- local_irq_enable();
-
page = new_slab(s, gfpflags, node);
- if (gfpflags & __GFP_WAIT)
- local_irq_disable();
-
if (page) {
c = __this_cpu_ptr(s->cpu_slab);
stat(s, ALLOC_SLAB);
--