Message-ID: <0000013a6aec10e3-304d4336-6d62-4b0f-9d06-e2ca4c6d8dcf-000000@email.amazonses.com>
Date: Tue, 16 Oct 2012 18:53:06 +0000
From: Christoph Lameter <cl@...ux.com>
To: David Rientjes <rientjes@...gle.com>
cc: Andi Kleen <andi@...stfloor.org>,
Ezequiel Garcia <elezegarcia@...il.com>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm@...ck.org, Tim Bird <tim.bird@...sony.com>,
celinux-dev@...ts.celinuxforum.org
Subject: Re: [Q] Default SLAB allocator
On Mon, 15 Oct 2012, David Rientjes wrote:
> This type of workload that really exhibits the problem with remote freeing
> would suggest that the design of slub itself is the problem here.
There is a tradeoff here between spatial data locality and temporal
locality. Slub always frees to the freelist of the slab page that the
object originated from and therefore preserves spatial data locality. It
will always serve all objects available in a slab page before moving on to
the next. Within a slab page it can consider temporal locality.
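As a rough illustration (a toy model in plain C, not actual kernel code;
the toy_* names are made up), the free/alloc behavior described above
looks roughly like this: a free always goes back onto the freelist of the
page the object came from, and allocation keeps draining one page before
touching the next:

/*
 * Toy model of the slub-style free path described above: every free
 * returns the object to the freelist of its originating page, and
 * allocation exhausts one page before moving on, so objects that share
 * a page are handed out together (spatial locality).
 */
#include <stddef.h>

struct toy_page;

struct toy_object {
	struct toy_page   *page;	/* page this object belongs to */
	struct toy_object *next;	/* link on that page's freelist */
};

struct toy_page {
	struct toy_object *freelist;	/* free objects of this page, LIFO */
	struct toy_page   *next;	/* next partially free page */
};

static struct toy_page *partial;	/* pages that still have free objects */

/* Free: push back onto the originating page, never onto a global queue. */
static void toy_free(struct toy_object *obj)
{
	struct toy_page *page = obj->page;

	obj->next = page->freelist;
	page->freelist = obj;
}

/* Alloc: keep serving the first partial page until it is empty. */
static struct toy_object *toy_alloc(void)
{
	while (partial) {
		struct toy_page *page = partial;
		struct toy_object *obj = page->freelist;

		if (obj) {
			page->freelist = obj->next;
			return obj;
		}
		partial = page->next;	/* page exhausted, move on */
	}
	return NULL;
}

The point being that the allocation order is dictated by page membership,
so a consumer ends up working within one page at a time.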
Slab considers temporal locality more important and will not return
objects to the originating slab pages until they are no longer in use. It
(ideally) will serve objects in the order they were freed. This breaks
down in the NUMA case: the allocator ended up with a pretty bizarre
queueing configuration (with lots and lots of queues) as a result of our
attempt to preserve the free/alloc order per NUMA node (look at the alien
caches, for example). Slub is an alternative to that approach.
Slab also has the problem of queue handling overhead due to the attempt to
throw objects out of the queues that are likely no longer cache hot. Every
few seconds it needs to run queue cleaning through all the queues that
exist on the system. How accurately this tracks the actual cache hotness
of objects is not clear.
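A hedged sketch of what that periodic cleaning amounts to, continuing the
toy_array_cache example above (toy_return_to_page() is a hypothetical
stand-in for handing an object back to its slab page, and the drain
fraction is arbitrary):

/*
 * Toy sketch of periodic queue cleaning: every pass, drop some of the
 * oldest entries of the per-CPU queue back to the slab pages on the
 * guess that they are no longer cache hot.
 * Uses struct toy_array_cache from the previous sketch.
 */
static void toy_return_to_page(void *obj);	/* hypothetical helper */

static void toy_reap_queue(struct toy_array_cache *ac)
{
	unsigned int drop, i;

	if (ac->avail == 0)
		return;

	/* Drain roughly a fifth of the queue per pass, oldest first. */
	drop = ac->avail / 5 + 1;
	if (drop > ac->avail)
		drop = ac->avail;

	for (i = 0; i < drop; i++)
		toy_return_to_page(ac->entry[i]);

	/* Slide the surviving (more recently freed) entries down. */
	for (i = drop; i < ac->avail; i++)
		ac->entry[i - drop] = ac->entry[i];
	ac->avail -= drop;
}

Whether the entries dropped this way are actually cold is exactly the
question raised above: the queue only knows the order of frees, not what
is still sitting in the CPU cache.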