Message-ID: <20150908175451.2ce83a0b@redhat.com>
Date: Tue, 8 Sep 2015 17:54:51 +0200
From: Jesper Dangaard Brouer <brouer@...hat.com>
To: Christoph Lameter <cl@...ux.com>
Cc: iamjoonsoo.kim@....com, akpm@...ux-foundation.org,
linux-mm@...ck.org, netdev@...r.kernel.org, brouer@...hat.com
Subject: Re: [PATCH mm] slab: implement bulking for SLAB allocator
On Tue, 8 Sep 2015 10:22:32 -0500 (CDT)
Christoph Lameter <cl@...ux.com> wrote:
> On Tue, 8 Sep 2015, Jesper Dangaard Brouer wrote:
>
> > Also notice how well bulking maintains the performance when the bulk
> > size increases (which is a sore spot for the SLUB allocator).
>
> Well you are not actually completing the free action in SLAB. This is
> simply queueing the item to be freed later. Also was this test done on a
> NUMA system? Alien caches at some point come into the picture.
This test was a single-CPU benchmark with no contention or concurrency.
But the code was compiled with CONFIG_NUMA=y.
I don't know the slAb code very well, but the kmem_cache_node->list_lock
looks like a scalability issue. I guess that is what you are referring
to ;-)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Sr. Network Kernel Developer at Red Hat
Author of http://www.iptv-analyzer.org
LinkedIn: http://www.linkedin.com/in/brouer