Message-Id: <20220829075618.69069-1-feng.tang@intel.com>
Date: Mon, 29 Aug 2022 15:56:14 +0800
From: Feng Tang <feng.tang@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>,
Dmitry Vyukov <dvyukov@...gle.com>
Cc: Dave Hansen <dave.hansen@...el.com>, linux-mm@...ck.org,
linux-kernel@...r.kernel.org, Feng Tang <feng.tang@...el.com>
Subject: [PATCH v4 0/4] mm/slub: some debug enhancements for kmalloc objects
kmalloc's API family is critical for mm, and one of its characteristics
is that it rounds up the requested size to a fixed one (mostly a power
of 2). When a user requests memory for '2^n + 1' bytes, 2^(n+1) bytes
could actually be allocated, so in the worst case around 50% of the
memory is wasted.
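
For example (illustration only, the kmalloc_waste_example() helper
below is not part of this patchset), a 257-byte request is served from
the kmalloc-512 cache, leaving roughly 255 bytes of the object unused:

	/*
	 * Illustration only (not from this patchset): a 2^n + 1 byte
	 * request lands in the next power-of-2 kmalloc cache, here
	 * kmalloc-512.
	 */
	#include <linux/slab.h>
	#include <linux/printk.h>

	static void kmalloc_waste_example(void)
	{
		void *buf = kmalloc(257, GFP_KERNEL);	/* 2^8 + 1 bytes */

		if (buf) {
			/* ksize() reports the usable size: 512 here */
			pr_info("requested 257, usable size is %zu\n",
				ksize(buf));
			kfree(buf);
		}
	}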
The wastage is not a big issue for requests that get allocated and
freed quickly, but it may cause problems for objects with a longer
lifetime, and there have even been OOMs in some extreme cases.
This patchset tries to:
* Add a debug method to track each kmalloced object's wastage info,
  and show the call stack of the original allocation (depends on the
  SLAB_STORE_USER flag)
* Extend the redzone sanity check to the extra kmalloced space beyond
  the requested size, to better detect illegitimate access to it
  (depends on SLAB_STORE_USER & SLAB_RED_ZONE); a rough sketch of the
  idea follows below.
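
The rough idea behind the extended redzone check, as a simplified
sketch (the helpers below are illustrative only, not the actual patch
code), is to fill the space between the originally requested size and
the kmalloc bucket size with the SLUB_RED_ACTIVE pattern at allocation
time, and verify it has not been touched at free time:

	/*
	 * Simplified sketch, not the actual patch code: poison the
	 * area beyond the requested size, and check it stays intact.
	 * Needs <linux/poison.h> for SLUB_RED_ACTIVE and
	 * <linux/string.h> for memset().
	 */
	static void kmalloc_redzone_fill(void *object, unsigned int orig_size,
					 unsigned int bucket_size)
	{
		if (orig_size < bucket_size)
			memset(object + orig_size, SLUB_RED_ACTIVE,
			       bucket_size - orig_size);
	}

	static bool kmalloc_redzone_ok(void *object, unsigned int orig_size,
				       unsigned int bucket_size)
	{
		u8 *p = (u8 *)object + orig_size;

		while (p < (u8 *)object + bucket_size) {
			/* any modified byte means an out-of-bounds write */
			if (*p != SLUB_RED_ACTIVE)
				return false;
			p++;
		}
		return true;
	}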
The redzone part has been tested with the code below:
	for (shift = 3; shift <= 12; shift++) {
		size = 1 << shift;
		buf = kmalloc(size + 4, GFP_KERNEL);

		/* We have 96, 192 kmalloc sizes, which are not powers of 2 */
		if (size == 64 || size == 128)
			oob_size = 16;
		else
			oob_size = size - 4;

		memset(buf + size + 4, 0xee, oob_size);
		kfree(buf);
	}
Please help to review, thanks!
- Feng
---
Changelogs:
since v3:
* rebase against the latest post-6.0-rc1 slab tree's 'for-next' branch
* fix a bug reported by 0Day, where the kmalloc redzone data and
  kasan's free meta data overlap in the same kmalloc object data area
since v2:
* rebase against slab tree's 'for-next' branch
* fix pointer handling (Kefeng Wang)
* move kzalloc zeroing handling change to a separate patch (Vlastimil Babka)
* make 'orig_size' only depend on KMALLOC & STORE_USER flag
bits (Vlastimil Babka)
since v1:
* limit the 'orig_size' to kmalloc objects only, and save it after
  the track data in the metadata area (Vlastimil Babka)
* fix an offset calculation problem in print_trailer
since RFC:
* fix problems in kmem_cache_alloc_bulk() and records sorting,
improve the print format (Hyeonggon Yoo)
* fix a compiling issue found by 0Day bot
* update the commit log based on info from iova developers
Feng Tang (4):
mm/slub: enable debugging memory wasting of kmalloc
mm/slub: only zero the requested size of buffer for kzalloc
mm: kasan: Add free_meta size info in struct kasan_cache
mm/slub: extend redzone check to cover extra allocated kmalloc space
than requested
 include/linux/kasan.h |   2 +
 include/linux/slab.h  |   2 +
 mm/kasan/common.c     |   2 +
 mm/slab.c             |   6 +-
 mm/slab.h             |  13 +++-
 mm/slab_common.c      |   4 +
 mm/slub.c             | 168 +++++++++++++++++++++++++++++++++++++-----
 7 files changed, 172 insertions(+), 25 deletions(-)
--
2.34.1