Date:	Fri, 16 Jan 2009 15:07:52 -0600 (CST)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	Nick Piggin <npiggin@...e.de>
cc:	Pekka Enberg <penberg@...helsinki.fi>,
	"Zhang, Yanmin" <yanmin_zhang@...ux.intel.com>,
	Lin Ming <ming.m.lin@...el.com>, linux-mm@...ck.org,
	linux-kernel@...r.kernel.org,
	Andrew Morton <akpm@...ux-foundation.org>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [patch] SLQB slab allocator

On Fri, 16 Jan 2009, Nick Piggin wrote:

> > handled by the same 2M TLB covering a 32k page. If the 4k pages are
> > dispersed then you may need 8 2M TLBs (which already covers a quarter
> > of the available 2M TLBs on Nehalem, for example) for which the larger
> > alloc just needs a single one.
>
> Yes I know that. But it's pretty theoretical IMO (and I could equally
> describe a theoretical situation where increased fragmentation in higher
> order slabs will result in worse TLB coverage).

This is theoretical only for small memory sizes. Once you have
terabytes of memory it becomes significant pretty quickly.
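
To make the worst case concrete, here is a back-of-the-envelope sketch
(a standalone userspace program, nothing kernel-specific; the addresses
are made up): it counts the distinct 2M-aligned regions a set of 4k
pages touches, which is the number of large-page TLB entries needed to
map them.

#include <stdio.h>

#define HUGE_SHIFT 21	/* 2M region, as mapped by one large TLB entry */

/* Distinct 2M regions touched = worst-case number of 2M TLB entries. */
static int tlb_entries_needed(const unsigned long *pages, int n)
{
	int i, j, seen, count = 0;

	for (i = 0; i < n; i++) {
		seen = 0;
		for (j = 0; j < i; j++)
			if ((pages[j] >> HUGE_SHIFT) ==
			    (pages[i] >> HUGE_SHIFT))
				seen = 1;
		if (!seen)
			count++;
	}
	return count;
}

int main(void)
{
	unsigned long contig[8], spread[8];
	int i;

	for (i = 0; i < 8; i++) {
		contig[i] = 0x40000000UL + i * 4096UL;	   /* one 32k alloc */
		spread[i] = 0x40000000UL + i * 0x200000UL; /* 2M apart */
	}
	/* Prints 1 vs 8: the 32k alloc needs one entry, the dispersed
	   4k pages need eight -- a quarter of a 32-entry 2M TLB. */
	printf("contiguous 32k: %d\n", tlb_entries_needed(contig, 8));
	printf("dispersed 4k pages: %d\n", tlb_entries_needed(spread, 8));
	return 0;
}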

> > It has lists of free objects that are bound to a particular page. That
> > simplifies numa handling since all the objects in a "queue" (or page) have
> > the same NUMA characteristics.
>
> The same can be said of SLQB and SLAB as well.

Sorry, not at all. SLAB and SLQB queue objects from different pages in
the same queue.
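
A minimal sketch of the structural difference being argued here
(illustrative declarations only, not the actual kernel structures or
field names):

/*
 * SLUB-style: the free list is threaded through the free objects of a
 * single slab page, so everything reachable from it shares that page's
 * NUMA node.
 */
struct slub_page_sketch {
	void *freelist;		/* first free object in this page */
	int node;		/* one node id describes all of them */
};

/*
 * SLAB/SLQB-style: a queue is an array of object pointers that may
 * point into many different pages, so its entries can have mixed NUMA
 * characteristics.
 */
struct queue_sketch {
	unsigned int avail;
	void *entry[120];	/* objects from arbitrary pages/nodes */
};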

> > was assigned to a processor. Memory wastage may only occur because
> > each processor needs to have a separate page from which to allocate.
> > SLAB-like designs need to put a large number of objects in queues,
> > which may keep a number of pages in the allocated pages pool although
> > all objects are unused. That does not occur with SLUB.
>
> That's wrong. SLUB keeps completely free pages on its partial lists, and
> also IIRC can keep free pages pinned in the per-cpu page. I have actually
> seen SLQB use less memory than SLUB in some situations for this reason.

As I said, it pins a single page as the per-cpu page and uses that in a
way that you call a queue and I call a freelist.
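
Roughly the per-cpu state in question, simplified from 2.6.28-era
mm/slub.c (some fields elided):

struct kmem_cache_cpu {
	void **freelist;	/* next free object in c->page */
	struct page *page;	/* the one pinned per-cpu slab page */
	int node;		/* node of c->page */
};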

SLUB keeps a few pages on the partial list right now because it tries
to avoid trips to the page allocator (which is quite slow). These could
be eliminated if the page allocator worked effectively. Note, however,
that this number is a per-node limit.
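
The shape of that per-node limit, as a sketch (the constant and names
are illustrative, not the exact mm/slub.c code):

#define KEEP_PARTIAL 10		/* illustrative per-node cap */

/*
 * On free, when a slab page becomes completely empty: keep it on the
 * node's partial list only while the list is short, to save round
 * trips to the page allocator; otherwise give the page back.
 */
static int keep_empty_slab(unsigned long nr_partial_on_node)
{
	return nr_partial_on_node < KEEP_PARTIAL;
}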

SLAB and SLQB can have large quantities of objects in their queues, and
each object can keep a page out of circulation if it is the last object
in that page. This is a per-queue thing, and you have at least two
queues per cpu. SLAB has queues per cpu, per pair of cpus, per node and
per alien node for each node. That can pin quite a number of pages on
large systems. Note that SLAB has one queue per cpu whereas you already
have two per cpu; in SMP configurations this may mean that SLQB has
more queues than SLAB.
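
To put rough numbers on that pinning (a back-of-the-envelope estimate
per kmem_cache, ignoring the per-cpu-pair queues): with C cpus and N
nodes, SLAB has on the order of C per-cpu queues + N shared queues +
N*(N-1) alien queues, while two queues per cpu give SLQB 2*C. At C=64,
N=8 that is 64 + 8 + 56 = 128 queues for SLAB and likewise 128 for
SLQB; on a single-node SMP box (N=1) SLAB drops to 65 while SLQB stays
at 128, which is the comparison above.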

