Message-ID: <d50c3fae-1c2c-42b1-b622-fcd89c6b2dd3@suse.cz>
Date: Mon, 10 Feb 2025 11:05:35 +0100
From: Vlastimil Babka <vbabka@...e.cz>
To: GONG Ruiqi <gongruiqi1@...wei.com>, Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>, David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>, Kees Cook <kees@...nel.org>
Cc: Tamas Koczka <poprdi@...gle.com>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Hyeonggon Yoo <42.hyeyoo@...il.com>, Xiu Jianfeng <xiujianfeng@...wei.com>,
linux-mm@...ck.org, linux-hardening@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] slab: Achieve better kmalloc caches randomization
in kvmalloc
On 2/8/25 02:47, GONG Ruiqi wrote:
> As revealed by this writeup[1], due to the fact that __kmalloc_node (now
> renamed to __kmalloc_node_noprof) is an exported symbol and will never
> get inlined, using it in kvmalloc_node (now __kvmalloc_node_noprof)
> would make the RET_IP inside always point to the same address:
>
> upper_caller
>   kvmalloc
>     kvmalloc_node
>       kvmalloc_node_noprof
>         __kvmalloc_node_noprof          <-- all macros all the way down here
>           __kmalloc_node_noprof
>             __do_kmalloc_node(.., _RET_IP_)
>               ...                       <-- _RET_IP_ points to
>
> That literally means all kmalloc invoked via kvmalloc would use the same
> seed for cache randomization (CONFIG_RANDOM_KMALLOC_CACHES), which makes
> this hardening unfunctional.
non-functional?
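
For readers unfamiliar with the mechanism: under CONFIG_RANDOM_KMALLOC_CACHES
the cache copy is picked by hashing the caller address, roughly as in the
kmalloc_type() logic sketched below (simplified from the code added by the
original randomization commit; exact helpers and constants may differ between
kernel versions):

  /* _RET_IP_ is simply the return address of the current function: */
  #define _RET_IP_  (unsigned long)__builtin_return_address(0)

  /*
   * Sketch of cache selection under CONFIG_RANDOM_KMALLOC_CACHES.
   * random_kmalloc_seed is initialized at boot.  If 'caller' is the
   * same fixed address for every kvmalloc() user, hash_64() yields the
   * same index and all of them land in one cache copy, defeating the
   * randomization.
   */
  return KMALLOC_RANDOM_START +
         hash_64(caller ^ random_kmalloc_seed,
                 ilog2(RANDOM_KMALLOC_CACHES_NR + 1));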
> The root cause of this problem, IMHO, is that using RET_IP only cannot
> identify the actual allocation site in case of kmalloc being called
> inside wrappers or helper functions.
inside non-inlined wrappers... ?
> And I believe there could be
> similar cases in other functions. Nevertheless, I haven't thought of
> any good solution for this. So for now let's solve this specific case
> first.
Yeah, it's a similar problem with shared allocation wrappers to the one
that allocation tagging has.
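
To make the shared-wrapper problem concrete (hypothetical code, not from
the kernel; __do_alloc stands in for the internal helper that receives the
caller address):

  /* The allocator entry records its caller via _RET_IP_: */
  void *alloc(size_t size)
  {
          return __do_alloc(size, _RET_IP_); /* return address of alloc() */
  }

  /*
   * A non-inlined wrapper has exactly one call into alloc(), so every
   * user of wrapper() is recorded as that single address:
   */
  void *wrapper(size_t size)
  {
          return alloc(size);            /* all call sites collapse here */
  }

  /*
   * The approach taken by this patch: let the wrapper capture its own
   * _RET_IP_ and pass it straight to the internal helper, yielding one
   * distinct value per wrapper caller:
   */
  void *wrapper_fixed(size_t size)
  {
          return __do_alloc(size, _RET_IP_);
  }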
> For __kvmalloc_node_noprof, replace __kmalloc_node_noprof and call
> __do_kmalloc_node directly instead, so that RET_IP can take the return
> address of kvmalloc and differentiate each kvmalloc invocation:
>
> upper_caller
>   kvmalloc
>     kvmalloc_node
>       kvmalloc_node_noprof
>         __kvmalloc_node_noprof          <-- all macros all the way down here
>           __do_kmalloc_node(.., _RET_IP_)
>             ...                         <-- _RET_IP_ points to
>
> Thanks to Tamás Koczka for the report and discussion!
>
> Link: https://github.com/google/security-research/pull/83/files#diff-1604319b55a48c39a210ee52034ed7ff5b9cdc3d704d2d9e34eb230d19fae235R200 [1]
This should be slightly better? A permalink for the file itself:
https://github.com/google/security-research/blob/908d59b573960dc0b90adda6f16f7017aca08609/pocs/linux/kernelctf/CVE-2024-27397_mitigation/docs/exploit.md
Thanks.
> Reported-by: Tamás Koczka <poprdi@...gle.com>
> Signed-off-by: GONG Ruiqi <gongruiqi1@...wei.com>
> ---
> mm/slub.c | 6 +++---
> 1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index 0830894bb92c..46e884b77dca 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -4903,9 +4903,9 @@ void *__kvmalloc_node_noprof(DECL_BUCKET_PARAMS(size, b), gfp_t flags, int node)
> * It doesn't really make sense to fallback to vmalloc for sub page
> * requests
> */
> - ret = __kmalloc_node_noprof(PASS_BUCKET_PARAMS(size, b),
> - kmalloc_gfp_adjust(flags, size),
> - node);
> + ret = __do_kmalloc_node(size, PASS_BUCKET_PARAM(b),
> + kmalloc_gfp_adjust(flags, size),
> + node, _RET_IP_);
> if (ret || size <= PAGE_SIZE)
> return ret;
>