Message-Id: <20221021032405.1825078-1-feng.tang@intel.com>
Date: Fri, 21 Oct 2022 11:24:02 +0800
From: Feng Tang <feng.tang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>,
Andrey Konovalov <andreyknvl@...il.com>,
Kees Cook <keescook@...omium.org>
Cc: Dave Hansen <dave.hansen@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, kasan-dev@...glegroups.com,
Feng Tang <feng.tang@...el.com>
Subject: [PATCH v7 0/3] mm/slub: extend redzone check for kmalloc objects
The kmalloc API family is critical for mm, and one of its properties is that
it rounds the requested size up to a fixed cache size (mostly a power of 2).
When a user requests memory for '2^n + 1' bytes, 2^(n+1) bytes may actually
be allocated, so there is extra space beyond what was originally requested.
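As a minimal, purely illustrative sketch (not part of this series), the
round-up can be observed with ksize(), which reports the size of the
underlying slab object:

	/* request 2^5 + 1 bytes; the object comes from the kmalloc-64 cache */
	void *p = kmalloc(33, GFP_KERNEL);

	if (p) {
		/* ksize() reports the object size, typically 64 here */
		pr_info("requested 33, object size %zu\n", ksize(p));
		kfree(p);
	}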
This patchset extends the redzone sanity check to that extra kmalloc'ed
space beyond the requested size, to better detect illegitimate access to
it. (It depends on SLAB_STORE_USER & SLAB_RED_ZONE.)
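For reference, both flags can be enabled at boot via the documented
slub_debug kernel parameter (Z = red zoning, U = user/owner tracking),
e.g.:

	slub_debug=ZU

See slub.rst for the full syntax and per-cache variants.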
The redzone part has been tested with the code below:

	for (shift = 3; shift <= 12; shift++) {
		size = 1 << shift;
		buf = kmalloc(size + 4, GFP_KERNEL);
		/* We have 96 and 192 byte kmalloc sizes, which are not powers of 2 */
		if (size == 64 || size == 128)
			oob_size = 16;
		else
			oob_size = size - 4;
		memset(buf + size + 4, 0xee, oob_size);
		kfree(buf);
	}
(This series is against the slab tree's 'for-6.2/slub-sysfs' branch, with
HEAD 54736f702526.)
Please help to review, thanks!
- Feng
---
Changelogs:
since v6:
* the 1/4 patch (kmalloc memory wastage debug) was merged
into 6.1-rc1, so drop it
* refine the kasan patch by extending existing APIs and hiding
kasan internal data structure info (Andrey Konovalov)
* only reduce zeroing size when slub debug is enabled to
avoid security risk (Kees Cook/Andrey Konovalov)
* collect Acked-by tag from Hyeonggon Yoo
since v5:
* Refine code/comments and add more perf info in commit log for
kzalloc change (Hyeonggon Yoo)
* change the kasan param name and refine comments about
kasan+redzone handling (Andrey Konovalov)
* put free pointer in metadata to make redzone check cover all
kmalloc objects (Hyeonggon Yoo)
since v4:
* fix a race issue in v3, by moving kmalloc debug init into
alloc_debug_processing (Hyeonggon Yoo)
* add 'partial_context' for better parameter passing in get_partial()
call chain (Vlastimil Babka)
* update 'slub.rst' for 'alloc_traces' part (Hyeonggon Yoo)
* update code comments for 'orig_size'
since v3:
* rebase against latest post 6.0-rc1 slab tree's 'for-next' branch
* fix a bug reported by 0Day, where kmalloc-redzoned data and kasan's
free metadata overlap in the same kmalloc object data area
since v2:
* rebase against slab tree's 'for-next' branch
* fix pointer handling (Kefeng Wang)
* move kzalloc zeroing handling change to a separate patch (Vlastimil Babka)
* make 'orig_size' only depend on KMALLOC & STORE_USER flag
bits (Vlastimil Babka)
since v1:
* limit the 'orig_size' to kmalloc objects only, and save
it after track in metadata (Vlastimil Babka)
* fix an offset calculation problem in print_trailer
since RFC:
* fix problems in kmem_cache_alloc_bulk() and records sorting,
improve the print format (Hyeonggon Yoo)
* fix a compiling issue found by 0Day bot
* update the commit log based on info from iova developers
Feng Tang (3):
mm/slub: only zero requested size of buffer for kzalloc when debug
enabled
mm: kasan: Extend kasan_metadata_size() to also cover in-object size
mm/slub: extend redzone check to extra allocated kmalloc space than
requested
include/linux/kasan.h | 5 ++--
mm/kasan/generic.c | 19 +++++++++----
mm/slab.c | 7 +++--
mm/slab.h | 22 +++++++++++++--
mm/slab_common.c | 4 +++
mm/slub.c | 65 +++++++++++++++++++++++++++++++++++++------
6 files changed, 100 insertions(+), 22 deletions(-)
--
2.34.1