Message-ID: <95c1aedc-1bb3-480f-9c82-efc22d2beaf8@suse.cz>
Date: Fri, 15 Nov 2024 14:29:41 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: Hyeonggon Yoo <42.hyeyoo@...il.com>, Feng Tang <feng.tang@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Andrey Konovalov <andreyknvl@...il.com>, Marco Elver <elver@...gle.com>,
Alexander Potapenko <glider@...gle.com>, Dmitry Vyukov <dvyukov@...gle.com>,
Danilo Krummrich <dakr@...nel.org>, Narasimhan.V@....com,
linux-mm@...ck.org, kasan-dev@...glegroups.com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 2/3] mm/slub: Improve redzone check and zeroing for
krealloc()
On 11/14/24 14:34, Hyeonggon Yoo wrote:
> On Thu, Oct 17, 2024 at 12:42 AM Feng Tang <feng.tang@...el.com> wrote:
>>
>> For the current krealloc(), one problem is that its caller doesn't pass
>> the old request size: say the object is a 64-byte kmalloc one, but the
>> caller may have only requested 48 bytes. Then when krealloc() shrinks or
>> grows within the same object, or allocates a new, bigger object, it lacks
>> this 'original size' information to do accurate data preserving or
>> zeroing (when __GFP_ZERO is set).
>>
>> Thus with slub debug redzone and object tracking enabled, parts of the
>> object after krealloc() might contain redzone data instead of zeroes,
>> which violates the __GFP_ZERO guarantee. The good thing is that in this
>> case kmalloc caches do have this 'orig_size' feature, so solve the
>> problem by utilizing 'orig_size' to do accurate data zeroing and preserving.
>>
>> [Thanks to syzbot and V, Narasimhan for discovering kfence and big
>> kmalloc related issues in an early patch version]
>>
>> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
>> Signed-off-by: Feng Tang <feng.tang@...el.com>
>> ---
>> mm/slub.c | 84 +++++++++++++++++++++++++++++++++++++++----------------
>> 1 file changed, 60 insertions(+), 24 deletions(-)
>>
>> diff --git a/mm/slub.c b/mm/slub.c
>> index 1d348899f7a3..958f7af79fad 100644
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -4718,34 +4718,66 @@ static __always_inline __realloc_size(2) void *
>> __do_krealloc(const void *p, size_t new_size, gfp_t flags)
>> {
>> void *ret;
>> - size_t ks;
>> -
>> - /* Check for double-free before calling ksize. */
>> - if (likely(!ZERO_OR_NULL_PTR(p))) {
>> - if (!kasan_check_byte(p))
>> - return NULL;
>> - ks = ksize(p);
>> - } else
>> - ks = 0;
>> -
>> - /* If the object still fits, repoison it precisely. */
>> - if (ks >= new_size) {
>> - /* Zero out spare memory. */
>> - if (want_init_on_alloc(flags)) {
>> - kasan_disable_current();
>> + size_t ks = 0;
>> + int orig_size = 0;
>> + struct kmem_cache *s = NULL;
>> +
>> + /* Check for double-free. */
>> + if (unlikely(ZERO_OR_NULL_PTR(p)))
>> + goto alloc_new;
>
> nit: I think kasan_check_byte() is the function that checks for double-free?
Hm yeah, moved the comment.
> Otherwise looks good to me,
> Reviewed-by: Hyeonggon Yoo <42.hyeyoo@...il.com>
Thanks!