Message-ID: <ea75290d-5fad-4785-8790-408877d846e2@suse.cz>
Date: Wed, 12 Feb 2025 16:12:43 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: GONG Ruiqi <gongruiqi1@...wei.com>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, Kees Cook <kees@...nel.org>
Cc: Tamas Koczka <poprdi@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, Xiu Jianfeng <xiujianfeng@...wei.com>,
linux-mm@...ck.org, linux-hardening@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3 0/2] Refine kmalloc caches randomization in kvmalloc
On 2/12/25 09:15, GONG Ruiqi wrote:
> Hi,
>
> v3:
> - move all the way from kmalloc_gfp_adjust to kvrealloc_noprof into
> mm/slub.c
> - some rewording for commit logs
> v2: https://lore.kernel.org/all/20250208014723.1514049-1-gongruiqi1@huawei.com/
> - change the implementation as Vlastimil suggested
> v1: https://lore.kernel.org/all/20250122074817.991060-1-gongruiqi1@huawei.com/
>
> Tamás reported [1] that kmalloc cache randomization doesn't actually
> work for kmalloc allocations made via kvmalloc. For more details, see
> the commit log of patch 2.
>
> The fix requires a direct call from __kvmalloc_node_noprof to
> __do_kmalloc_node, which is a static function in a different .c file.
> As suggested by Vlastimil [2], this is achieved by simply moving
> __kvmalloc_node_noprof from mm/util.c to mm/slub.c, together with the
> other functions of the same family.
>
> Link: https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md?plain=1#L259 [1]
> Link: https://lore.kernel.org/all/62044279-0c56-4185-97f7-7afac65ff449@suse.cz/ [2]
>
> GONG Ruiqi (2):
> slab: Adjust placement of __kvmalloc_node_noprof
> slab: Achieve better kmalloc caches randomization in kvmalloc
Applied to slab/for-next, thanks!
>
> mm/slub.c | 162 ++++++++++++++++++++++++++++++++++++++++++++++++++++++
> mm/util.c | 162 ------------------------------------------------------
> 2 files changed, 162 insertions(+), 162 deletions(-)
>