Message-ID: <d0927ca6-1710-5b2b-3682-6a80eb4e48d1@suse.cz>
Date: Tue, 30 Nov 2021 15:55:43 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: David Laight <David.Laight@...LAB.COM>,
Christoph Lameter <cl@...two.org>
Cc: Rustam Kovhaev <rkovhaev@...il.com>,
"penberg@...nel.org" <penberg@...nel.org>,
"rientjes@...gle.com" <rientjes@...gle.com>,
"iamjoonsoo.kim@....com" <iamjoonsoo.kim@....com>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"corbet@....net" <corbet@....net>,
"djwong@...nel.org" <djwong@...nel.org>,
"david@...morbit.com" <david@...morbit.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>,
"gregkh@...uxfoundation.org" <gregkh@...uxfoundation.org>,
"viro@...iv.linux.org.uk" <viro@...iv.linux.org.uk>,
"dvyukov@...gle.com" <dvyukov@...gle.com>
Subject: Re: [PATCH v4] slob: add size header to all allocations
On 11/23/21 11:18, David Laight wrote:
> From: Vlastimil Babka
>> Sent: 22 November 2021 10:46
>>
>> On 11/22/21 11:36, Christoph Lameter wrote:
>> > On Mon, 22 Nov 2021, Vlastimil Babka wrote:
>> >
>> >> But it seems there's no reason we couldn't do better? I.e. use the value of
>> >> SLOB_HDR_SIZE only to align the beginning of the actual object (and name the
>> >> define differently than SLOB_HDR_SIZE). But the size of the header, where we
>> >> store the object length, could be just a native word - 4 bytes on 32bit, 8 on
>> >> 64bit. There should be no reason for the header's address to also be aligned
>> >> to ARCH_KMALLOC_MINALIGN / ARCH_SLAB_MINALIGN, as only SLOB itself processes
>> >> it, not the slab consumers which rely on those alignments?
>> >
>> > Well the best way would be to put it at the end of the object in order to
>> > avoid the alignment problem. This is a particular issue with SLOB because
>> > it allows multiple types of objects in a single page frame.
>> >
>> > If only one type of object would be allowed then the object size etc can
>> > be stored in the page struct.
>
> Or just a single byte that is the index of the associated free list structure.
> For 32bit and for the smaller kmalloc() area it may be reasonable to have
> a separate array indexed by the page of the address.
>
>> > So I guess placement at the beginning cannot be avoided. That in turn runs
>> > into trouble with the DMA requirements on some platforms where the
>> > beginning of the object has to be cache line aligned.
>>
>> It's no problem to have the real beginning of the object aligned, and the
>> prepended header not.
>
> I'm not sure that helps.
> The header can't share a cache line with the previous item (because it
> might be mapped for DMA) so will always take a full cache line.
So if this is true, then I think we already have a problem with SLOB today
(and AFAICS it's not even due to changes done by my 2019 commit 59bb47985c1d
("mm, sl[aou]b: guarantee natural alignment for kmalloc(power-of-two)"), but
older).
Let's say we are on arm64 where (AFAICS):
ARCH_KMALLOC_MINALIGN = ARCH_DMA_MINALIGN = 128
ARCH_SLAB_MINALIGN = 64
The point is that ARCH_SLAB_MINALIGN is smaller than ARCH_DMA_MINALIGN.
Let's say we call kmalloc(64) and get a completely fresh page.
In SLOB, __do_kmalloc_node() will calculate minalign as
max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN), thus 128.
It will then call slob_alloc() with size + minalign = 64 + 128 = 192 bytes,
and align = align_offset = 128.
Thus the allocation will use 128 bytes for the header and 64 for the object,
with both the header and the object aligned to 128 bytes.
But the second half of the second 128-byte block will remain free, as the
allocated size is only 192 bytes:
| 128B header, aligned | 64B object | 64B free | rest also free |
If there's another kmalloc allocation, the 128-byte alignment due to
ARCH_KMALLOC_MINALIGN will prevent it from using these 64 bytes, so that's
fine. But if there's a kmem_cache_alloc() from a cache serving <=64B
objects, it will be aligned only to ARCH_SLAB_MINALIGN and will happily use
those 64 bytes that share the 128-byte block where the previous kmalloc
allocation lies.
So either I missed something, or we violate the rule that kmalloc() provides
blocks where ARCH_KMALLOC_MINALIGN is not just the alignment of their
beginning, but also a guarantee that nothing else touches the
N*ARCH_KMALLOC_MINALIGN area containing the allocated object.
> There might be some strange scheme where you put the size at the end
> and the offset of the 'last end' into the page struct.
> The DMA API should let you safely read the size from an allocated
> buffer - but you can't modify it.
>
> There is also all the code that allocates 'power of 2' sized buffers
> under the assumption they are efficient - as soon as you add a size
> field, that assumption just causes the sizes of items to (often) double.
>
> David
>
>> The code already does that before this patch for the
>> kmalloc power-of-two alignments, where e.g. the object can be aligned to 256
>> bytes, but the prepended header to a smaller ARCH_KMALLOC_MINALIGN /
>> ARCH_SLAB_MINALIGN.
>>
>> > I don't know, but it seems that making slob that sophisticated is counter
>> > productive. Remove SLOB?
>>
>> I wouldn't mind, but somebody might :)
>>
>
> -
> Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
> Registration No: 1397386 (Wales)
>