Message-ID: <20240805152004.5039-26-dakr@kernel.org>
Date: Mon, 5 Aug 2024 17:19:44 +0200
From: Danilo Krummrich <dakr@...nel.org>
To: ojeda@...nel.org,
alex.gaynor@...il.com,
wedsonaf@...il.com,
boqun.feng@...il.com,
gary@...yguo.net,
bjorn3_gh@...tonmail.com,
benno.lossin@...ton.me,
a.hindborg@...sung.com,
aliceryhl@...gle.com,
akpm@...ux-foundation.org
Cc: daniel.almeida@...labora.com,
faith.ekstrand@...labora.com,
boris.brezillon@...labora.com,
lina@...hilina.net,
mcanal@...lia.com,
zhiw@...dia.com,
acurrid@...dia.com,
cjia@...dia.com,
jhubbard@...dia.com,
airlied@...hat.com,
ajanulgu@...hat.com,
lyude@...hat.com,
linux-kernel@...r.kernel.org,
rust-for-linux@...r.kernel.org,
linux-mm@...ck.org,
Danilo Krummrich <dakr@...nel.org>
Subject: [PATCH v4 25/28] rust: alloc: implement `Cmalloc` in module allocator_test
So far the kernel's `Box` and `Vec` types can't be used by userspace test
cases, since all users of those types (e.g. `CString`) use kernel allocators
for instantiation.

In order to allow userspace test cases to make use of such types as well,
implement the `Cmalloc` allocator within the allocator_test module and type
alias all kernel allocators to `Cmalloc`. The `Cmalloc` allocator uses libc's
aligned_alloc(), free() and malloc_usable_size() functions as its backend.
Signed-off-by: Danilo Krummrich <dakr@...nel.org>
---
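For illustration only (not part of the patch): with the aliases below, every
allocation in a userspace test ends up in libc, so the kernel's types can be
exercised directly from a host-side test. The concrete names used in this
sketch (`KBox`, `KVec`, `GFP_KERNEL`, the `#[test]` harness) are assumptions
about the rest of this series and may differ; imports are omitted.

    #[test]
    fn kernel_types_work_in_userspace() {
        // Backed by libc's aligned_alloc()/free() via `Cmalloc`.
        let b = KBox::new(42u32, GFP_KERNEL).unwrap();
        assert_eq!(*b, 42);

        let mut v = KVec::new();
        v.push(*b, GFP_KERNEL).unwrap();
        assert_eq!(v.len(), 1);
    }
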
I know, having an `old_size` parameter would indeed help implement `Cmalloc`.
However, I really don't want test infrastructure to influence the design of
kernel internal APIs.

Besides that, adding the `old_size` parameter would have the consequence that
we'd either need to honor it for kernel allocators too (which adds another
source of potential failure) or ignore it for all kernel allocators (which
potentially tricks people into making wrong assumptions about how the API
works).
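To make that trade-off concrete, this is the `Allocator::realloc()` signature
as it stands in this series, next to the hypothetical `old_size` variant
discussed above (the latter is purely illustrative and deliberately not
proposed):

    // Actual method, as implemented for `Cmalloc` further down:
    unsafe fn realloc(
        ptr: Option<NonNull<u8>>,
        layout: Layout,
        flags: Flags,
    ) -> Result<NonNull<[u8]>, AllocError>;

    // Hypothetical variant with an `old_size` parameter -- not added, for the
    // reasons given above:
    //
    // unsafe fn realloc(
    //     ptr: Option<NonNull<u8>>,
    //     layout: Layout,
    //     old_size: usize,
    //     flags: Flags,
    // ) -> Result<NonNull<[u8]>, AllocError>;
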
---
rust/kernel/alloc/allocator_test.rs | 91 ++++++++++++++++++++++++++---
1 file changed, 84 insertions(+), 7 deletions(-)
diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
index 1b2642c547ec..05fd75b89238 100644
--- a/rust/kernel/alloc/allocator_test.rs
+++ b/rust/kernel/alloc/allocator_test.rs
@@ -2,20 +2,97 @@
 
 #![allow(missing_docs)]
 
-use super::{AllocError, Allocator, Flags};
+use super::{flags::*, AllocError, Allocator, Flags};
 use core::alloc::Layout;
+use core::ptr;
 use core::ptr::NonNull;
 
-pub struct Kmalloc;
+pub struct Cmalloc;
+pub type Kmalloc = Cmalloc;
 pub type Vmalloc = Kmalloc;
 pub type KVmalloc = Kmalloc;
 
-unsafe impl Allocator for Kmalloc {
+extern "C" {
+    #[link_name = "aligned_alloc"]
+    fn libc_aligned_alloc(align: usize, size: usize) -> *mut core::ffi::c_void;
+
+    #[link_name = "free"]
+    fn libc_free(ptr: *mut core::ffi::c_void);
+
+    // Do not use this function for production code! For test cases only, it's
+    // probably fine if used with care.
+    #[link_name = "malloc_usable_size"]
+    fn libc_malloc_usable_size(ptr: *mut core::ffi::c_void) -> usize;
+}
+
+unsafe impl Allocator for Cmalloc {
+    fn alloc(layout: Layout, flags: Flags) -> Result<NonNull<[u8]>, AllocError> {
+        let layout = layout.pad_to_align();
+
+        // SAFETY: Returns either NULL or a pointer to a memory allocation that satisfies or
+        // exceeds the given size and alignment requirements.
+        let raw_ptr = unsafe { libc_aligned_alloc(layout.align(), layout.size()) } as *mut u8;
+
+        if flags.contains(__GFP_ZERO) && !raw_ptr.is_null() {
+            // SAFETY: `raw_ptr` points to memory successfully allocated with `libc_aligned_alloc`.
+            let size = unsafe { libc_malloc_usable_size(raw_ptr.cast()) };
+
+            // SAFETY: `raw_ptr` points to memory successfully allocated with `libc_aligned_alloc`
+            // of at least `size` bytes.
+            unsafe { core::ptr::write_bytes(raw_ptr, 0, size) };
+        }
+
+        let ptr = if layout.size() == 0 {
+            NonNull::dangling()
+        } else {
+            NonNull::new(raw_ptr).ok_or(AllocError)?
+        };
+
+        Ok(NonNull::slice_from_raw_parts(ptr, layout.size()))
+    }
+
     unsafe fn realloc(
-        _ptr: Option<NonNull<u8>>,
-        _layout: Layout,
-        _flags: Flags,
+        ptr: Option<NonNull<u8>>,
+        layout: Layout,
+        flags: Flags,
     ) -> Result<NonNull<[u8]>, AllocError> {
-        panic!();
+        let layout = layout.pad_to_align();
+        let src: *mut u8 = if let Some(src) = ptr {
+            src.as_ptr().cast()
+        } else {
+            ptr::null_mut()
+        };
+
+        if layout.size() == 0 {
+            // SAFETY: `src` is either NULL or has previously been allocated with this
+            // `Allocator`.
+            unsafe { libc_free(src.cast()) };
+
+            return Ok(NonNull::slice_from_raw_parts(NonNull::dangling(), 0));
+        }
+
+        let dst = Self::alloc(layout, flags)?;
+
+        if src.is_null() {
+            return Ok(dst);
+        }
+
+        // SAFETY: `src` is either NULL or has previously been allocated with this `Allocator`.
+        let old_size = unsafe { libc_malloc_usable_size(src.cast()) };
+
+        // SAFETY: `src` has previously been allocated with this `Allocator`; `dst` has just been
+        // newly allocated. Taking the minimum of their sizes guarantees that we do not exceed
+        // the bounds of either allocation.
+        unsafe {
+            // Always copy -- do not rely on the potential spare memory reported by
+            // malloc_usable_size(), which technically might still be sufficient.
+            ptr::copy_nonoverlapping(
+                src,
+                dst.as_ptr().cast(),
+                core::cmp::min(layout.size(), old_size),
+            )
+        };
+
+        Ok(dst)
     }
 }
--
2.45.2