Message-ID: <20250208014723.1514049-1-gongruiqi1@huawei.com>
Date: Sat, 8 Feb 2025 09:47:21 +0800
From: GONG Ruiqi <gongruiqi1@...wei.com>
To: Christoph Lameter <cl@...ux.com>, Pekka Enberg <penberg@...nel.org>, David
Rientjes <rientjes@...gle.com>, Joonsoo Kim <iamjoonsoo.kim@....com>, Andrew
Morton <akpm@...ux-foundation.org>, Vlastimil Babka <vbabka@...e.cz>, Kees
Cook <kees@...nel.org>
CC: Tamas Koczka <poprdi@...gle.com>, Roman Gushchin
<roman.gushchin@...ux.dev>, Hyeonggon Yoo <42.hyeyoo@...il.com>, Xiu Jianfeng
<xiujianfeng@...wei.com>, <linux-mm@...ck.org>,
<linux-hardening@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<gongruiqi1@...wei.com>
Subject: [PATCH v2 0/2] Refine kmalloc caches randomization in kvmalloc
Hi,
v2: change the implementation as Vlastimil suggested
v1: https://lore.kernel.org/all/20250122074817.991060-1-gongruiqi1@huawei.com/
Tamás reported [1] that kmalloc cache randomization doesn't actually
work for kmalloc allocations made via kvmalloc. For more details, see
the commit log of patch 2.
The fix requires a direct call from __kvmalloc_node_noprof to
__do_kmalloc_node, a static function in a different .c file.
Compared to v1, this version achieves that by simply moving
__kvmalloc_node_noprof into mm/slub.c, as suggested by Vlastimil [2].
Link: https://github.com/google/security-research/pull/83/files#diff-1604319b55a48c39a210ee52034ed7ff5b9cdc3d704d2d9e34eb230d19fae235R200 [1]
Link: https://lore.kernel.org/all/62044279-0c56-4185-97f7-7afac65ff449@suse.cz/ [2]
GONG Ruiqi (2):
slab: Adjust placement of __kvmalloc_node_noprof
slab: Achieve better kmalloc caches randomization in kvmalloc
include/linux/slab.h | 22 +++++++++
mm/slub.c | 90 ++++++++++++++++++++++++++++++++++
mm/util.c | 112 -------------------------------------------
3 files changed, 112 insertions(+), 112 deletions(-)
--
2.25.1