Message-ID: <ce1c307e-b7ae-2590-7b2e-43cbe963bc4d@intel.com>
Date: Mon, 24 Apr 2023 15:46:30 +0200
From: Alexander Lobakin <aleksander.lobakin@...el.com>
To: Gong Ruiqi <gongruiqi1@...wei.com>
CC: Hyeonggon Yoo <42.hyeyoo@...il.com>,
Dennis Zhou <dennis@...nel.org>, Tejun Heo <tj@...nel.org>,
Christoph Lameter <cl@...ux.com>,
Pekka Enberg <penberg@...nel.org>,
David Rientjes <rientjes@...gle.com>,
Joonsoo Kim <iamjoonsoo.kim@....com>,
Andrew Morton <akpm@...ux-foundation.org>,
Vlastimil Babka <vbabka@...e.cz>,
Roman Gushchin <roman.gushchin@...ux.dev>,
Alexander Potapenko <glider@...gle.com>,
Marco Elver <elver@...gle.com>,
Dmitry Vyukov <dvyukov@...gle.com>, <linux-mm@...ck.org>,
<linux-kernel@...r.kernel.org>, <kasan-dev@...glegroups.com>,
Kees Cook <keescook@...omium.org>,
<linux-hardening@...r.kernel.org>,
Paul Moore <paul@...l-moore.com>,
<linux-security-module@...r.kernel.org>,
James Morris <jmorris@...ei.org>,
Wang Weiyang <wangweiyang2@...wei.com>,
Xiu Jianfeng <xiujianfeng@...wei.com>
Subject: Re: [PATCH RFC] Randomized slab caches for kmalloc()
From: Gong, Ruiqi <gongruiqi1@...wei.com>
Date: Mon, 24 Apr 2023 10:54:33 +0800
> Sorry for the late reply. I just came back from my paternity leave :)
>
> On 2023/04/05 23:15, Alexander Lobakin wrote:
>> From: Hyeonggon Yoo <42.hyeyoo@...il.com>
>> Date: Wed, 5 Apr 2023 21:26:47 +0900
>>
>>> ...
>>>
>>> I'm not yet sure if this feature is appropriate for the mainline kernel.
>>>
>>> I have a few questions:
>>>
>>> 1) What is the cost of this configuration, in terms of memory overhead or
>>> execution time?
>>>
>>>
>>> 2) The actual cache depends on the caller, which is static at build time,
>>> not at runtime.
>>>
>>> What about using (caller ^ (some subsystem-wide random sequence)),
>>>
>>> which is constant at runtime?
>>
>> Why can't we just do
>>
>> get_random_u32_below(CONFIG_RANDOM_KMALLOC_CACHES_NR)
>>
>> ?
>
> This makes the cache selection "dynamic", i.e. each kmalloc() will
> randomly pick a different cache each time it's executed. The problem
> with this approach is that it only reduces the probability of the cache
> being sprayed by the attacker, who can bypass it simply by repeating
> the attack in a brute-force manner.
>
> Our proposal is to tie the randomness to the code address rather than
> to time, i.e. allocations in different code paths would most likely pick
> different caches, while kmalloc() at a given call site would use the
> same cache copy every time it is executed. In this way, the code path
> the attacker uses would most likely pick a different cache than the one
> the targeted subsystem/driver picks, which means that in most cases the
> heap spraying is unachievable.
Ah, I see now. Thanks for the explanation, made it really clear.
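If I get it right, the difference boils down to something like this (just a
sketch, with `nr` standing for the number of cache copies):

	/* per-call ("dynamic"): a fresh index on every allocation,
	 * so an attacker can simply retry until the caches collide */
	index = get_random_u32_below(nr);

	/* per-callsite ("static"): the same index every time for a given
	 * code path; different paths most likely land in different caches */
	index = _RET_IP_ % nr;	/* the naive PoC selection from the RFC */
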
>
>> It's fast enough according to Jason... `_RET_IP_ % nr` doesn't sound
>> "secure" to me. It really is a compile-time constant, which can be
>> calculated (or not?) manually. Even if it weren't, `% nr` doesn't sound
>> good; there should be at least hash_32().
>
> Yes, `_RET_IP_ % nr` is a bit naive. Currently the patch is more like a
> PoC, so I wrote it this way. Indeed, a proper hash function should be
> used here.
>
> And yes, _RET_IP_ could somehow be manually determined, especially for
> kernels without KASLR, and I think adding a per-boot random seed into
> the selection could solve this.
I recall how it is done for kCFI/FineIBT in the x86 code -- it also uses a
per-boot random seed (although there the seed gets patched into the code
itself each time alternatives are applied). So this should probably be good
enough. The only thing I'm wondering about is where to store this per-boot
seed :D
It's generic code, so you can't patch it directly. OTOH storing it in
.data/.bss can make it vulnerable to attacks... Can't it?
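Something along these lines is what I have in mind -- only a sketch, the
names (random_kmalloc_seed, the initcall, kmalloc_random_index) are made up
here, and it assumes the number of cache copies is a power of two. Marking
the seed __ro_after_init at least keeps it from being overwritten after
boot, even though an infoleak could still read it:

	#include <linux/hash.h>
	#include <linux/init.h>
	#include <linux/log2.h>
	#include <linux/random.h>

	/* Per-boot seed; __ro_after_init so it becomes read-only once
	 * boot finishes, addressing the "writable .data/.bss" concern. */
	static unsigned long random_kmalloc_seed __ro_after_init;

	static int __init random_kmalloc_seed_init(void)
	{
		random_kmalloc_seed = get_random_long();
		return 0;
	}
	early_initcall(random_kmalloc_seed_init);

	/* Map a call site to one of CONFIG_RANDOM_KMALLOC_CACHES_NR copies;
	 * hash_64() mixes the bits properly instead of a plain '% nr'. */
	static inline unsigned int kmalloc_random_index(unsigned long caller)
	{
		return hash_64(caller ^ random_kmalloc_seed,
			       ilog2(CONFIG_RANDOM_KMALLOC_CACHES_NR));
	}
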
>
> I will implement these in v2. Thanks!
>
>>
>> Thanks,
>> Olek
>>
Thanks,
Olek