Message-ID: <73d7d53f-439b-44a9-98ca-0b1c8fbc1661@elijahs.space>
Date: Thu, 25 Sep 2025 10:20:13 -0700
From: Elijah <me@...jahs.space>
To: Danilo Krummrich <dakr@...nel.org>, Elijah Wright <git@...jahs.space>
Cc: Miguel Ojeda <ojeda@...nel.org>, Alex Gaynor <alex.gaynor@...il.com>,
 Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
 Björn Roy Baron <bjorn3_gh@...tonmail.com>,
 Benno Lossin <lossin@...nel.org>, Andreas Hindborg <a.hindborg@...nel.org>,
 Alice Ryhl <aliceryhl@...gle.com>, Trevor Gross <tmgross@...ch.edu>,
 rust-for-linux@...r.kernel.org, linux-kernel@...r.kernel.org,
 Lorenzo Stoakes <lorenzo.stoakes@...cle.com>,
 Vlastimil Babka <vbabka@...e.cz>, "Liam R. Howlett"
 <Liam.Howlett@...cle.com>, Uladzislau Rezki <urezki@...il.com>,
 linux-mm@...ck.org
Subject: Re: [PATCH] rust: slab: add basic slab module

I was thinking of maybe creating something like KBox for kmem_cache, but I
didn't want to touch allocator code yet, so I figured I would just lay the
groundwork for that to exist. rbtree.rs uses KBox now, but I'm not sure it
should, at least not if it's going to scale to many nodes.
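
Roughly the kind of thing I mean, as a sketch only -- the names and
signatures below are made up for illustration and nothing here is part of
this patch:

use core::ptr::NonNull;
use kernel::alloc::{AllocError, Flags};

/// Placeholder for whatever the Rust-side kmem_cache wrapper ends up being.
pub struct KmemCache { /* ... */ }

/// A KBox-like owner for objects allocated from a specific cache.
pub struct KmemCacheBox<T> {
    ptr: NonNull<T>,
    /// Raw pointer only to keep the sketch short; the real thing would need
    /// a counted reference so the cache can't be destroyed while this exists.
    cache: *const KmemCache,
}

impl<T> KmemCacheBox<T> {
    pub fn new(_cache: &KmemCache, _value: T, _flags: Flags) -> Result<Self, AllocError> {
        // Would call kmem_cache_alloc() and move `value` into the object.
        todo!()
    }
}

impl<T> Drop for KmemCacheBox<T> {
    fn drop(&mut self) {
        // Would drop T in place and kmem_cache_free() back into the cache.
    }
}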

On 9/25/2025 2:54 AM, Danilo Krummrich wrote:
> What's the motivation?
> 
> I mean, we will need kmem_cache soon. But the users will all be drivers, e.g.
> the GPU drivers that people work on currently.
> 
> Drivers shouldn't use "raw" allocators (such as Kmalloc [1] or Vmalloc [2]), but
> the corresponding "managed" allocation primitives, such as KBox [3], VBox [4],
> KVec, etc.
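
[Aside for readers following along: driver-side use of the managed
primitives looks roughly like this -- paraphrased from memory, the exact
types and paths live in rust/kernel/alloc/ and the prelude:]

use kernel::prelude::*;

fn demo() -> Result {
    // KBox: a kmalloc-backed single allocation, freed automatically on drop.
    let val = KBox::new(42u32, GFP_KERNEL)?;

    // KVec: growable buffer, also kmalloc-backed; allocation flags are
    // passed explicitly wherever it may grow.
    let mut v = KVec::new();
    v.push(*val, GFP_KERNEL)?;

    Ok(())
}
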
> 
> Therefore, the code below shouldn't be used by drivers directly, hence the
> question for motivation.
> 
> In any case, kmem_cache is a special allocator (special as in it can have a
> non-static lifetime in contrast to other kernel allocators) and should be
> integrated with the existing infrastructure in rust/kernel/alloc/.
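
[For context: the Allocator trait in rust/kernel/alloc.rs currently looks
roughly like the following (shape only, paraphrased from memory). None of
the methods take &self, so every implementer is effectively zero-sized and
'static -- which is exactly what a dynamically created kmem_cache doesn't
fit into:]

use core::alloc::Layout;
use core::ptr::NonNull;
use kernel::alloc::{AllocError, Flags};

pub unsafe trait Allocator {
    unsafe fn realloc(
        ptr: Option<NonNull<u8>>,
        layout: Layout,
        old_layout: Layout,
        flags: Flags,
    ) -> Result<NonNull<[u8]>, AllocError>;

    // alloc() and free() exist as well (provided in terms of realloc());
    // elided here.
}
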
> 
> I think there are multiple options for that; (1) isn't really an option, but I
> think it's good to mention anyway:
> 
>    (1) Allow for non-zero sized implementations of the Allocator trait [3], such
>        that we can store a counted reference to the KmemCache. This is necessary
>        to ensure that a Box<T, KmemCache> can't outlive the KmemCache itself.
> 
>        The reason I said it's not really an option is that it rules out dynamic
>        dispatch for the generic Box type.
> 
>    (2) Same as (1), but with a custom Box type. This keeps dynamic dispatch for
>        the generic Box type (i.e. KBox, VBox, KVBox), but duplicates quite a bit
>        of code and still doesn't allow dynamic dispatch for the KmemCacheBox.
> 
>    (3) Implement a macro to generate a custom KmemCache Allocator trait
>        implementation for every KmemCache instance with a static lifetime.
> 
>        This makes KmemCache technically equivalent to the other allocators, such
>        as Kmalloc, but obviously has the downside that the KmemCache might live
>        much longer than required.
> 
>        Technically, most KmemCache instances live for the whole module lifetime,
>        so it might be fine though.
> 
>        (This is what I think Alice proposed.)
> 
>    (4) Solve the problem on the C side and let kmem_cache_alloc() take care of
>        acquiring a reference count to the backing kmem_cache. The main question
>        here would be where to store the pointer for decreasing the reference
>        count on kmem_cache_free().
> 
>        Theoretically, it could be stored within the allocation itself, but it's a
>        bit of a yikes.
> 
>        However, it would resolve all the mentioned problems above.
> 
> I'd like to see (3) or (4), also depending on what the MM folks think.
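
To make (3) a bit more concrete, here's a rough sketch of what a per-cache
allocator with a static lifetime could look like (names are made up; a
macro would presumably generate one of these per cache):

use core::alloc::Layout;
use core::ptr::NonNull;
use kernel::alloc::{AllocError, Allocator, Flags};

/// Zero-sized allocator type for one specific cache, like Kmalloc/Vmalloc.
pub struct FooCacheAlloc;

// SAFETY: sketch only -- a real implementation has to uphold the full
// Allocator contract (this one just panics).
unsafe impl Allocator for FooCacheAlloc {
    unsafe fn realloc(
        _ptr: Option<NonNull<u8>>,
        _layout: Layout,
        _old_layout: Layout,
        _flags: Flags,
    ) -> Result<NonNull<[u8]>, AllocError> {
        // Would look up a static `struct kmem_cache *` (created at module
        // init or lazily on first use) and go through kmem_cache_alloc() /
        // kmem_cache_free() via bindings. Elided here.
        todo!()
    }
}

// With that, Box<Foo, FooCacheAlloc> works through the existing Box/Vec
// machinery -- at the cost that the cache is never destroyed, as noted
// above.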

