Message-ID: <49C3C321.8080508@vflare.org>
Date:	Fri, 20 Mar 2009 21:54:01 +0530
From:	Nitin Gupta <nitingupta910@...il.com>
To:	Christoph Lameter <cl@...ux-foundation.org>
CC:	Pekka Enberg <penberg@...helsinki.fi>, linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/3] xvmalloc memory allocator

Christoph Lameter wrote:
> On Fri, 20 Mar 2009, Nitin Gupta wrote:
> 
>> xvmalloc is a memory allocator designed specifically for compcache project.
> 
> Its an allocator that is highmem capable? Looks like an entirely new
> animal to me.
> 
>> * Features:
>>  - Low metadata overhead (just 4 bytes per object)
> 
> SLUB has 0 byte overhead. SLOB has 2 bytes.

SLUB: 0 bytes of metadata, but a lot of wastage due to rounding object sizes up.
SLOB: 2-byte header, but I think we should really return objects aligned to at
least 4 bytes on most archs (including x86 and x86_64). Apart from the smaller
header, it has too many other problems, as I detailed in my previous mail.

> 
>>  - O(1) Alloc/Free - except when we have to call system page allocator to
>>     get additional memory.
> 
> SLOB is not O(1) okay but the others are.

SLOB is currently the only allocator with good packing, so it is the only one
worth comparing against. We lose the memory-compression game if we waste too much :)

> 
>>  - Very low fragmentation: In all tests, xvMalloc memory usage is within 12%
>>     of "Ideal".
> 
> Maybe try a fair test instead of relying on kmalloc rounding up to
> the next power of 2 size?
> 

Okay, for testing I will write some wrappers around SLOB that call
slob_alloc() directly to avoid any of this rounding up. I hope to have some
data on this soon. But considering the other SLOB issues, this should
hopefully not be a blocker for compcache.


>> One of the main highlights is that it maps pages only when required.
>> So, it does not hog vmalloc area which is very small on 32-bit systems.
> 
> Got some difficulty understanding what is going on here. So this allocator
> is highmem capable? Doesnt that mean that you must make function calls to
> ensure that an object is mapped before accessing it.
> 

Yes. An xvmalloc caller gets back a <pagenum, offset> pair and has to map
the page separately to get a dereferenceable pointer.

>> +#include "xvmalloc_int.h"
>> +
>> +static void stat_inc(u64 *value)
>> +{
>> +	*value = *value + 1;
>> +}
> 
> (*value) += 1?
> 
Looks better.

> atomic_inc?
> 
There is really no need to make these stat variables atomic.

> local_inc?
> 

This one looks useful when we work on making compcache code scalable.

>> +static void bitmap_set(u32 *map, u32 idx)
>> +{
>> +	*map |= (u32)(1 << idx);
>> +}
> 
> We have bitops for that purpose. Please use those.
> 

Ok.

>> +/*
>> + * Get index of free list having blocks of size greater than
>> + * or equal to requested size.
>> + */
>> +static u32 get_index(u32 size)
>> +{
>> +	size = (size + FL_DELTA_MASK) & ~FL_DELTA_MASK;
> 
> See the ALIGN macro.
> 

ALIGN does the same thing - I will use it instead.

>> +/*
>> + * Allocate a memory page. Called when a pool needs to grow.
>> + */
>> +static u32 xv_alloc_page(void)
>> +{
>> +	struct page *page;
>> +
>> +	page = alloc_page(GFP_NOIO | __GFP_HIGHMEM);
> 
> Yes a highmem based allocator!!!!
> 
>> +
>> +	if (unlikely(!page))
>> +		return INVALID_PGNUM;
> 
> Return NULL?
> 
>> +#define INVALID_PGNUM	((u32)(-1))
> 
> NULL
> 

okay, INVALID_PGNUM -> NULL

>> +#define ROUNDUP(x, y)	(((x) + (y) - 1) / (y) * (y))
> 
> There is a global macro available for that purpose
> 
Ok, will use that.

>> +/* Each individual bitmap is 32-bit */
>> +#define BITMAP_BITS	32
> 
> Use kernel constants please BITS_PER_LONG here.
> 
Ok.

>> +#define ROUNDUP_ALIGN(x)	(((x) + XV_ALIGN_MASK) & ~XV_ALIGN_MASK)
> 
> == ALIGN?

Yup.

> 
> Well I think this allocator is pretty useful for systems that depend to a
> large degree on highmem. This is basically x86 32 bit configuration swith
> more than 1G memmory.
> 

Not just highmem:
1) It's O(1), and as a side effect it causes less cache pollution than SLOB,
which does that absolutely crazy freelist scanning.
2) Better packing (I will post comparison data soon).

I think that, with a bit of playing around with the interfaces, it could be
turned into a general-purpose allocator (though this would most probably lack
highmem support).


Thanks for your feedback.
Nitin

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
