Message-ID: <20150616072107.GA13125@js1304-P5Q-DELUXE>
Date: Tue, 16 Jun 2015 16:21:07 +0900
From: Joonsoo Kim <iamjoonsoo.kim@....com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
Andrew Morton <akpm@...ux-foundation.org>,
netdev@...r.kernel.org, Alexander Duyck <alexander.duyck@...il.com>
Subject: Re: [PATCH 2/7] slub bulk alloc: extract objects from the per cpu slab

On Mon, Jun 15, 2015 at 05:52:07PM +0200, Jesper Dangaard Brouer wrote:
> From: Christoph Lameter <cl@...ux.com>
>
> [NOTICE: Already in AKPM's quilt-queue]
>
> First piece: acceleration of retrieval of per cpu objects
>
> If we are allocating lots of objects then it is advantageous to disable
> interrupts and avoid the this_cpu_cmpxchg() operation to get these objects
> faster.
>
> Note that we cannot do the fast operation if debugging is enabled, because
> we would have to add extra code to do all the debugging checks. And it
> would not be fast anyway.
>
> Note also that the requirement of having interrupts disabled
> avoids having to do processor flag operations.
>
> Allocate as many objects as possible in the fast way and then fall back to
> the generic implementation for the rest of the objects.
>
> Signed-off-by: Christoph Lameter <cl@...ux.com>
> Cc: Jesper Dangaard Brouer <brouer@...hat.com>
> Cc: Pekka Enberg <penberg@...nel.org>
> Cc: David Rientjes <rientjes@...gle.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@....com>
> Signed-off-by: Andrew Morton <akpm@...ux-foundation.org>
> ---
> mm/slub.c | 27 ++++++++++++++++++++++++++-
> 1 file changed, 26 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 80f17403e503..d18f8e195ac4 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2759,7 +2759,32 @@ EXPORT_SYMBOL(kmem_cache_free_bulk);
> bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> void **p)
> {
> - return kmem_cache_alloc_bulk(s, flags, size, p);
> + if (!kmem_cache_debug(s)) {
> + struct kmem_cache_cpu *c;
> +
> + /* Drain objects in the per cpu slab */
> + local_irq_disable();
> + c = this_cpu_ptr(s->cpu_slab);
> +
> + while (size) {
> + void *object = c->freelist;
> +
> + if (!object)
> + break;
> +
> + c->freelist = get_freepointer(s, object);
> + *p++ = object;
> + size--;
> +
> + if (unlikely(flags & __GFP_ZERO))
> + memset(object, 0, s->object_size);
> + }
> + c->tid = next_tid(c->tid);
> +
> + local_irq_enable();
> + }
> +
> + return __kmem_cache_alloc_bulk(s, flags, size, p);
> }
> EXPORT_SYMBOL(kmem_cache_alloc_bulk);

Now I found that we need to call slab_pre_alloc_hook() before any operation
on the kmem_cache to support kmemcg accounting. And we need to call
slab_post_alloc_hook() on every allocated object to support debugging
features such as kasan and kmemleak.

Thanks.