Message-Id: <20150616145336.1cacbfb88ff55b0e088676c3@linux-foundation.org>
Date: Tue, 16 Jun 2015 14:53:36 -0700
From: Andrew Morton <akpm@...ux-foundation.org>
To: Jesper Dangaard Brouer <brouer@...hat.com>
Cc: linux-mm@...ck.org, Christoph Lameter <cl@...ux.com>,
netdev@...r.kernel.org, Alexander Duyck <alexander.duyck@...il.com>
Subject: Re: [PATCH 6/7] slub: improve bulk alloc strategy
On Mon, 15 Jun 2015 17:52:46 +0200 Jesper Dangaard Brouer <brouer@...hat.com> wrote:
> Call slowpath __slab_alloc() from within the bulk loop, as the
> side-effect of this call likely repopulates c->freelist.
>
> Choose to reenable local IRQs while calling slowpath.
>
> Saving some optimizations for later. E.g. it is possible to
> extract parts of __slab_alloc() and avoid the unnecessary and
> expensive (37 cycles) local_irq_{save,restore}. For now, be
> happy calling __slab_alloc(); it has a lower icache impact and
> I don't have to worry about correctness.
>
> ...
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2776,8 +2776,23 @@ bool kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> for (i = 0; i < size; i++) {
> void *object = c->freelist;
>
> - if (!object)
> - break;
> + if (unlikely(!object)) {
> + c->tid = next_tid(c->tid);
> + local_irq_enable();
> +
> + /* Invoke slow path one time, then retry fastpath
> + * as side-effect have updated c->freelist
> + */
That isn't very grammatical.
Block comments are formatted
/*
* like this
*/
please.
> + p[i] = __slab_alloc(s, flags, NUMA_NO_NODE,
> + _RET_IP_, c);
> + if (unlikely(!p[i])) {
> + __kmem_cache_free_bulk(s, i, p);
> + return false;
> + }
> + local_irq_disable();
> + c = this_cpu_ptr(s->cpu_slab);
> + continue; /* goto for-loop */
> + }
>
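For readers following along, the pattern the hunk implements can be sketched in userspace C. This is a toy model only: the freelist, `slow_refill()`, and `bulk_alloc()` below are illustrative stand-ins, not the slub internals, and IRQ/per-cpu handling is omitted. The point it shows is the strategy from the changelog: when the fast-path freelist runs dry mid-loop, invoke the slow path once (whose side effect repopulates the freelist) and then continue the same for-loop on the fast path.

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy object pool; the freelist chains objects through their first word. */
#define POOL 64
static void *pool[POOL];
static void *freelist;      /* NULL when exhausted */
static size_t refills;      /* counts slow-path invocations */

/* Slow path: rebuild the freelist (stands in for __slab_alloc()'s
 * side effect of repopulating c->freelist) and hand back one object. */
static void *slow_alloc(void)
{
	void *object;
	size_t i;

	for (i = 0; i < POOL - 1; i++)
		pool[i] = &pool[i + 1];
	pool[POOL - 1] = NULL;
	freelist = &pool[0];
	refills++;

	object = freelist;
	freelist = *(void **)object;	/* pop head */
	return object;
}

/* Bulk allocate: fast-path pop per iteration; on an empty freelist,
 * call the slow path once and continue the loop, as in the patch. */
static bool bulk_alloc(void **p, size_t size)
{
	size_t i;

	for (i = 0; i < size; i++) {
		void *object = freelist;

		if (!object) {
			/*
			 * Invoke the slow path one time; its side
			 * effect repopulates the freelist, so the
			 * next iterations retry the fast path.
			 */
			p[i] = slow_alloc();
			if (!p[i])
				return false;
			continue;
		}
		freelist = *(void **)object;	/* fast-path pop */
		p[i] = object;
	}
	return true;
}
```

Requesting more objects than one refill provides (e.g. 100 against a 64-object pool) exercises the slow path twice while the remaining iterations stay on the fast path, which is the icache/branch layout the patch is aiming for.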
--