Message-ID: <alpine.DEB.2.11.1509081209180.25526@east.gentwo.org>
Date: Tue, 8 Sep 2015 12:10:54 -0500 (CDT)
From: Christoph Lameter <cl@...ux.com>
To: Jesper Dangaard Brouer <brouer@...hat.com>
cc: iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
linux-mm@...ck.org, netdev@...r.kernel.org
Subject: Re: [PATCH mm] slab: implement bulking for SLAB allocator
On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
> This test was a single CPU benchmark with no congestion or concurrency.
> But the code was compiled with CONFIG_NUMA=y.
>
> I don't know the slAb code very well, but the kmem_cache_node->list_lock
> looks like a scalability issue. I guess that is what you are referring
> to ;-)
That lock can be mitigated, as in SLUB, by increasing per-cpu resources.
The problem in SLAB is the categorization of objects on free as to which
node they came from, and the use of arrays of pointers to avoid freeing
each object back to the object-tracking metadata structures in the slab
page.
The arrays of pointers have to be replicated for each node, each slab and
each processor.