Message-ID: <cd094c36-f22a-0a25-d5ee-7d502c5d50aa@suse.cz>
Date: Wed, 26 Oct 2022 12:08:18 +0200
From: Vlastimil Babka <vbabka@...e.cz>
To: Kees Cook <keescook@...omium.org>,
Andrew Morton <akpm@...ux-foundation.org>
Cc: Andrey Konovalov <andreyknvl@...il.com>,
David Rientjes <rientjes@...gle.com>,
Marco Elver <elver@...gle.com>,
Vincenzo Frascino <vincenzo.frascino@....com>,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
linux-hardening@...r.kernel.org
Subject: Re: [PATCH v3] mempool: Do not use ksize() for poisoning
On 10/26/22 12:02, Vlastimil Babka wrote:
> On 10/26/22 01:36, Kees Cook wrote:
>> Nothing appears to be using ksize() within the kmalloc-backed mempools
>> except the mempool poisoning logic. Use the actual pool size instead
>> of ksize() to avoid needing any special handling of the memory by
>> KASAN, UBSAN_BOUNDS, or FORTIFY_SOURCE.
>>
>> Suggested-by: Vlastimil Babka <vbabka@...e.cz>
>> Link: https://lore.kernel.org/lkml/f4fc52c4-7c18-1d76-0c7a-4058ea2486b9@suse.cz/
>> Cc: Andrey Konovalov <andreyknvl@...il.com>
>> Cc: David Rientjes <rientjes@...gle.com>
>> Cc: Marco Elver <elver@...gle.com>
>> Cc: Vincenzo Frascino <vincenzo.frascino@....com>
>> Cc: Andrew Morton <akpm@...ux-foundation.org>
>> Cc: linux-mm@...ck.org
>> Signed-off-by: Kees Cook <keescook@...omium.org>
>
> Acked-by: Vlastimil Babka <vbabka@...e.cz>
Ah, and since the subject was updated too, note that this is supposed to
replace/fix up the patch in mm-unstable:
mempool-use-kmalloc_size_roundup-to-match-ksize-usage.patch
>> ---
>> v3: remove ksize() calls instead of adding kmalloc_roundup_size() calls (vbabka)
>> v2: https://lore.kernel.org/lkml/20221018090323.never.897-kees@kernel.org/
>> v1: https://lore.kernel.org/lkml/20220923202822.2667581-14-keescook@chromium.org/
>> ---
>> mm/mempool.c | 6 +++---
>> 1 file changed, 3 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/mempool.c b/mm/mempool.c
>> index 96488b13a1ef..54204065037d 100644
>> --- a/mm/mempool.c
>> +++ b/mm/mempool.c
>> @@ -58,7 +58,7 @@ static void check_element(mempool_t *pool, void *element)
>> {
>> /* Mempools backed by slab allocator */
>> if (pool->free == mempool_free_slab || pool->free == mempool_kfree) {
>> - __check_element(pool, element, ksize(element));
>> + __check_element(pool, element, (size_t)pool->pool_data);
>> } else if (pool->free == mempool_free_pages) {
>> /* Mempools backed by page allocator */
>> int order = (int)(long)pool->pool_data;
>> @@ -81,7 +81,7 @@ static void poison_element(mempool_t *pool, void *element)
>> {
>> /* Mempools backed by slab allocator */
>> if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc) {
>> - __poison_element(element, ksize(element));
>> + __poison_element(element, (size_t)pool->pool_data);
>> } else if (pool->alloc == mempool_alloc_pages) {
>> /* Mempools backed by page allocator */
>> int order = (int)(long)pool->pool_data;
>> @@ -112,7 +112,7 @@ static __always_inline void kasan_poison_element(mempool_t *pool, void *element)
>> static void kasan_unpoison_element(mempool_t *pool, void *element)
>> {
>> if (pool->alloc == mempool_alloc_slab || pool->alloc == mempool_kmalloc)
>> - kasan_unpoison_range(element, __ksize(element));
>> + kasan_unpoison_range(element, (size_t)pool->pool_data);
>> else if (pool->alloc == mempool_alloc_pages)
>> kasan_unpoison_pages(element, (unsigned long)pool->pool_data,
>> false);
>