Date:	Tue, 17 Mar 2009 13:58:37 -0400 (EDT)
From:	Christoph Lameter <cl@...ux-foundation.org>
To:	Nitin Gupta <ngupta@...are.org>
cc:	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3]: xvmalloc memory allocator

On Tue, 17 Mar 2009, Nitin Gupta wrote:

> Creating slabs for sizes in the range, say, [32, 3/4*PAGE_SIZE] separated by
> 64 bytes will require 48 slabs! Then the slab for each size class will have
> wastage due to unused slab objects in that class. A larger difference
> between slab sizes (and thus fewer of them) will surely cause too much
> wastage due to internal fragmentation.

Slabs that match other existing slabs of similar sizes will be aliased, not
created. Create the 48 slabs and you will likely use only about 10 real
additional ones. The rest will just point to existing ones.

> Another (more important) point to consider is that use of slabs will
> eat up vmalloc area to keep slab memory backed by VA space. On 32-bit
> systems, the vmalloc area is small and limits the amount of memory that
> can be allocated for compressed pages. With xvmalloc we map/unmap pages
> on demand, thus removing the dependence on the vmalloc VA area.

Slab memory is not backed by vmalloc space.

> > Have you had a look at the SLOB approach?
> Nope. I will see how this may help.

SLOB is another attempt to reduce the wastage due to the rounding up of
object sizes to 2^N in SLAB/SLUB.

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
