lists.openwall.net - Open Source and information security mailing list archives

Date:   Mon, 22 Nov 2021 10:40:07 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Christoph Lameter <cl@...two.org>,
        Rustam Kovhaev <rkovhaev@...il.com>
Cc:     penberg@...nel.org, rientjes@...gle.com, iamjoonsoo.kim@....com,
        akpm@...ux-foundation.org, corbet@....net, djwong@...nel.org,
        david@...morbit.com, linux-kernel@...r.kernel.org,
        linux-mm@...ck.org, linux-doc@...r.kernel.org,
        gregkh@...uxfoundation.org, viro@...iv.linux.org.uk,
        dvyukov@...gle.com
Subject: Re: [PATCH v4] slob: add size header to all allocations

On 11/22/21 10:22, Christoph Lameter wrote:
> On Sun, 21 Nov 2021, Rustam Kovhaev wrote:
> 
>> Let's prepend both kmalloc() and kmem_cache_alloc() allocations with the
>> size header.
>> It simplifies the slab API and guarantees that both kmem_cache_alloc()
>> and kmalloc() memory could be freed by kfree().
>>
>> meminfo right after the system boot, x86-64 on xfs, without the patch:
>> Slab:              34700 kB
>>
>> the same, with the patch:
>> Slab:              35752 kB
> 
>> +#define SLOB_HDR_SIZE max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN)
> 
> Ok that is up to 128 bytes on some architectures. Mostly 32 or 64 bytes.
> 
>> @@ -307,6 +303,7 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node,
>>  	unsigned long flags;
>>  	bool _unused;
>>
>> +	size += SLOB_HDR_SIZE;
> 
> And every object now has this overhead? 128 bytes extra in extreme cases
> per object?
> 
> 
>> -	if (size < PAGE_SIZE - minalign) {
>> -		int align = minalign;
>> +	if (size < PAGE_SIZE - SLOB_HDR_SIZE) {
>> +		int align = SLOB_HDR_SIZE;
> 
> And the object is also aligned to 128 bytes boundaries on some
> architectures.
> 
> So a 4 byte object occupies 256 bytes in SLOB?
> 
> SLOB will no longer be a low memory overhead allocator then.

Hm, good point, I didn't realize those MINALIGN constants can be that large. I
think this overhead already existed with SLOB for the kmalloc caches, but now
it would be worse, as it would apply to all kmem caches.

But it seems there's no reason we couldn't do better? I.e. use the value of
SLOB_HDR_SIZE only to align the beginning of the actual object (and name the
define something other than SLOB_HDR_SIZE). The size of the header, where we
store the object length, could be just a native word - 4 bytes on 32-bit, 8 on
64-bit. There should be no reason for the header's address to also be aligned
to ARCH_KMALLOC_MINALIGN / ARCH_SLAB_MINALIGN, as only SLOB itself processes
it, not the slab consumers which rely on those alignments?
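[The alternative layout suggested here - object start aligned to the MINALIGN maximum, but only a word-sized length header stored immediately before it - can be sketched in userspace. This is a hypothetical model, not SLOB code: hdr_alloc/hdr_size are invented names, malloc stands in for the page-based slob_alloc, and the free path (which in SLOB works via page arithmetic) is omitted.]

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define OBJ_ALIGN 128	/* stands in for max(MINALIGN) on a worst-case arch */

/* Place the object on an OBJ_ALIGN boundary, with a native-word
 * length header tucked into the padding just before it.  Only the
 * allocator reads the header, so the header itself need not be
 * aligned to OBJ_ALIGN - that is the point of the suggestion. */
static void *hdr_alloc(size_t size)
{
	unsigned char *raw = malloc(size + OBJ_ALIGN + sizeof(size_t));
	uintptr_t obj;

	if (!raw)
		return NULL;
	/* First aligned address leaving room for the header in front. */
	obj = ((uintptr_t)raw + sizeof(size_t) + OBJ_ALIGN - 1)
		& ~(uintptr_t)(OBJ_ALIGN - 1);
	memcpy((void *)(obj - sizeof(size_t)), &size, sizeof(size));
	return (void *)obj;
}

/* What kfree() would consult to learn the object's length. */
static size_t hdr_size(const void *obj)
{
	size_t size;

	memcpy(&size, (const unsigned char *)obj - sizeof(size_t),
	       sizeof(size));
	return size;
}
```

The per-object cost of the header is then one word instead of up to 128 bytes; the alignment padding still applies to the object itself, but is no longer doubled by an aligned header slot.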
