Message-ID: <a36ab9a1-f07a-42ca-bb11-5bd0c70660bb@suse.cz>
Date: Thu, 13 May 2021 10:51:41 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: iamjoonsoo.kim@....com, rientjes@...gle.com, penberg@...nel.org,
cl@...ux.com, linux-mm@...ck.org, linux-kernel@...r.kernel.org,
Marco Elver <elver@...gle.com>
Subject: Re: [PATCH v3] mm, slub: change run-time assertion in kmalloc_index() to compile-time

On 5/13/21 8:28 AM, Hyeonggon Yoo wrote:
> On Wed, May 12, 2021 at 08:40:24PM -0700, Andrew Morton wrote:
>> On Thu, 13 May 2021 12:12:20 +0900 Hyeonggon Yoo <42.hyeyoo@...il.com> wrote:
>>
>> > On Wed, May 12, 2021 at 07:52:27PM -0700, Andrew Morton wrote:
>> > > This explodes in mysterious ways. The patch as I have it is appended,
>> > > for reference.
>> > >
>> > > gcc-10.3.0 allmodconfig.
>> > >
>> > > This patch suppresses the error:
>>
>> Ah, yes, of course: your patch changes kmalloc_index() to require that
>> it is always called with a constant `size'. kfence_test doesn't do
>> that.
>>
>> kfence is being a bit naughty here - the other kmalloc_index() callers
>> only compile the call after verifying that `size' is a compile-time
>> constant.
Agreed.
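For reference, this is roughly what the guard in the kmalloc() fast path looks
like (a simplified sketch from memory, SLOB ifdefs dropped, so not verbatim):

static __always_inline void *kmalloc(size_t size, gfp_t flags)
{
	if (__builtin_constant_p(size)) {
		unsigned int index;

		if (size > KMALLOC_MAX_CACHE_SIZE)
			return kmalloc_large(size, flags);

		/* only reached for a constant size <= KMALLOC_MAX_CACHE_SIZE */
		index = kmalloc_index(size);
		if (!index)
			return ZERO_SIZE_PTR;

		return kmem_cache_alloc_trace(
				kmalloc_caches[kmalloc_type(flags)][index],
				flags, size);
	}
	/* non-constant sizes never reach kmalloc_index() */
	return __kmalloc(size, flags);
}

So kmalloc_index() itself can assume a constant, in-range size.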
>> Would something like this work?
I'd prefer to keep kmalloc_index() for constant sizes only. The broken build
then warns anyone using it the wrong way that they shouldn't. Besides, it really
shouldn't be used outside of slab.

But if the kfence test really needs this, we could perhaps extract the
index-determining part out of kmalloc_slab().

Hmm, or I guess the kfence tests could just use kmalloc_slab() directly?
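Something like this, perhaps (untested sketch; assumes the test can include
mm/slab.h for the kmalloc_slab() declaration):

	/* mm/kfence/kfence_test.c */
	#include "../slab.h"	/* kmalloc_slab() */

	/* instead of the kmalloc_index()-based cache lookup: */
	struct kmem_cache *s = test_cache ?: kmalloc_slab(size, GFP_KERNEL);

kmalloc_slab() already does the size-to-cache lookup at run time, which is
what the test wants for a non-constant size anyway.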
>> include/linux/slab.h | 12 ++++++++----
>> mm/kfence/kfence_test.c | 4 ++--
>> 2 files changed, 10 insertions(+), 6 deletions(-)
>>
>> --- a/include/linux/slab.h~b
>> +++ a/include/linux/slab.h
>> @@ -374,7 +374,8 @@ static __always_inline enum kmalloc_cach
>> * Note: there's no need to optimize kmalloc_index because it's evaluated
>> * in compile-time.
>> */
>> -static __always_inline unsigned int kmalloc_index(size_t size)
>> +static __always_inline unsigned int kmalloc_index(size_t size,
>> + bool size_is_constant)
>> {
>> if (!size)
>> return 0;
>> @@ -410,7 +411,10 @@ static __always_inline unsigned int kmal
>> if (size <= 16 * 1024 * 1024) return 24;
>> if (size <= 32 * 1024 * 1024) return 25;
>>
>> - BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
>> + if (size_is_constant)
>> + BUILD_BUG_ON_MSG(1, "unexpected size in kmalloc_index()");
>> + else
>> + BUG();
>
>
> kfence generates sizes randomly. Because kfence uses a non-constant size,
> we should do either a run-time assertion or a compile-time assertion,
> depending on the situation.
>
> I think we can use __builtin_constant_p() here; we don't need to modify
> kmalloc_index()'s prototype.
>
> So what about this? If you think it makes sense, I'll send patch v4.
>
> I used KMALLOC_MAX_CACHE_SIZE to make sure the size is safe.
> It's safer than putting BUILD_BUG_ON_MSG(1, ...) after the if statements
> below, because KMALLOC_MAX_CACHE_SIZE can be less than 32MB.
>
> --- include/linux/slab.h.orig 2021-05-12 17:56:54.504738768 +0900
> +++ include/linux/slab.h 2021-05-13 15:06:25.724565850 +0900
> @@ -346,9 +346,18 @@ static __always_inline enum kmalloc_cach
> * 1 = 65 .. 96 bytes
> * 2 = 129 .. 192 bytes
> * n = 2^(n-1)+1 .. 2^n
> + *
> + * Note: there's no need to optimize kmalloc_index because it's evaluated
> + * in compile-time.
> */
> static __always_inline unsigned int kmalloc_index(size_t size)
> {
> + if (__builtin_constant_p(size)) {
> +		BUILD_BUG_ON_MSG(size > KMALLOC_MAX_CACHE_SIZE, "unexpected size in kmalloc_index()");
> + } else if (size > KMALLOC_MAX_CACHE_SIZE) {
> + BUG();
> + }
> +
> if (!size)
> return 0;
>
> @@ -382,8 +391,6 @@ static __always_inline unsigned int kmal
> if (size <= 8 * 1024 * 1024) return 23;
> if (size <= 16 * 1024 * 1024) return 24;
> if (size <= 32 * 1024 * 1024) return 25;
> - if (size <= 64 * 1024 * 1024) return 26;
> - BUG();
>
> /* Will never be reached. Needed because the compiler may complain */
> return -1;
>
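FWIW, with the __builtin_constant_p() variant above, the kfence test's existing
call site should keep building unchanged, because the compile-time branch is
dead for a non-constant size (assuming the compiler can still fold
__builtin_constant_p() there). Roughly:

	/* size comes from the test harness, so it is not a compile-time
	 * constant: __builtin_constant_p(size) is false and only the
	 * run-time size > KMALLOC_MAX_CACHE_SIZE check applies. */
	struct kmem_cache *s = test_cache ?:
		kmalloc_caches[kmalloc_type(GFP_KERNEL)][kmalloc_index(size)];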