Message-ID: <20240704170738.3621-8-dakr@redhat.com>
Date: Thu,  4 Jul 2024 19:06:35 +0200
From: Danilo Krummrich <dakr@...hat.com>
To: ojeda@...nel.org,
	alex.gaynor@...il.com,
	wedsonaf@...il.com,
	boqun.feng@...il.com,
	gary@...yguo.net,
	bjorn3_gh@...tonmail.com,
	benno.lossin@...ton.me,
	a.hindborg@...sung.com,
	aliceryhl@...gle.com
Cc: daniel.almeida@...labora.com,
	faith.ekstrand@...labora.com,
	boris.brezillon@...labora.com,
	lina@...hilina.net,
	mcanal@...lia.com,
	zhiw@...dia.com,
	acurrid@...dia.com,
	cjia@...dia.com,
	jhubbard@...dia.com,
	airlied@...hat.com,
	ajanulgu@...hat.com,
	lyude@...hat.com,
	linux-kernel@...r.kernel.org,
	rust-for-linux@...r.kernel.org,
	Danilo Krummrich <dakr@...hat.com>
Subject: [PATCH 07/20] rust: alloc: implement `Vmalloc` allocator

Implement `Allocator` for `Vmalloc`, the kernel's virtually contiguous
allocator, typically used for objects that are (much) larger than page
size.

All memory allocations made with `Vmalloc` end up in
`__vmalloc_noprof()`; all frees in `vfree()`.
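
For illustration only (not part of this patch), a minimal usage sketch
going through the `Allocator::realloc()` method added below; the import
paths and the `GFP_KERNEL` flag constant are assumed to match the rest
of this series:

    // Illustrative sketch only; paths assumed per this series.
    use core::alloc::Layout;
    use core::ptr;

    use kernel::alloc::{allocator::Vmalloc, flags::GFP_KERNEL, Allocator};

    fn vmalloc_example() {
        // 16 MiB is well beyond what `Kmalloc` is meant for, so back it
        // with vmalloc'ed pages instead.
        let layout = Layout::from_size_align(16 << 20, 8).unwrap();

        // A NULL `src` with `old_size == 0` requests a fresh allocation,
        // which ends up in `__vmalloc_noprof()`.
        let buf = match unsafe {
            Vmalloc.realloc(ptr::null_mut(), 0, layout, GFP_KERNEL)
        } {
            Ok(buf) => buf,
            Err(_) => return,
        };

        // ... use the buffer ...

        // Shrinking to a zero-sized layout releases the memory via
        // `vfree()`.
        let zero = Layout::from_size_align(0, 8).unwrap();
        let _ = unsafe {
            Vmalloc.realloc(buf.as_ptr().cast(), layout.size(), zero, GFP_KERNEL)
        };
    }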

Signed-off-by: Danilo Krummrich <dakr@...hat.com>
---
 rust/bindings/bindings_helper.h     |  1 +
 rust/kernel/alloc/allocator.rs      | 55 +++++++++++++++++++++++++++++
 rust/kernel/alloc/allocator_test.rs |  1 +
 3 files changed, 57 insertions(+)

diff --git a/rust/bindings/bindings_helper.h b/rust/bindings/bindings_helper.h
index ddb5644d4fd9..f10518045c16 100644
--- a/rust/bindings/bindings_helper.h
+++ b/rust/bindings/bindings_helper.h
@@ -15,6 +15,7 @@
 #include <linux/refcount.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
+#include <linux/vmalloc.h>
 #include <linux/wait.h>
 #include <linux/workqueue.h>
 
diff --git a/rust/kernel/alloc/allocator.rs b/rust/kernel/alloc/allocator.rs
index 1860cb79b875..0a4f27c5c3a6 100644
--- a/rust/kernel/alloc/allocator.rs
+++ b/rust/kernel/alloc/allocator.rs
@@ -16,6 +16,12 @@
 /// `bindings::krealloc`.
 pub struct Kmalloc;
 
+/// The virtually contiguous kernel allocator.
+///
+/// The vmalloc allocator allocates pages from the page level allocator and maps them into the
+/// contiguous kernel virtual space.
+pub struct Vmalloc;
+
 /// Returns a proper size to alloc a new object aligned to `new_layout`'s alignment.
 fn aligned_size(new_layout: Layout) -> usize {
     // Customized layouts from `Layout::from_size_align()` can have size < align, so pad first.
@@ -112,6 +118,55 @@ unsafe fn alloc_zeroed(&self, layout: Layout) -> *mut u8 {
     }
 }
 
+unsafe impl Allocator for Vmalloc {
+    unsafe fn realloc(
+        &self,
+        src: *mut u8,
+        old_size: usize,
+        layout: Layout,
+        flags: Flags,
+    ) -> Result<NonNull<[u8]>, AllocError> {
+        let mut size = aligned_size(layout);
+
+        let dst = if size == 0 {
+            // SAFETY: `src` is guaranteed to be previously allocated with this `Allocator` or NULL.
+            unsafe { bindings::vfree(src.cast()) };
+            NonNull::dangling()
+        } else if size <= old_size {
+            size = old_size;
+            NonNull::new(src).ok_or(AllocError)?
+        } else {
+            // SAFETY: It is safe to call `__vmalloc_noprof` with any `size` and any valid
+            // combination of GFP `flags`; an allocation failure is reported as a NULL pointer.
+            let dst = unsafe { bindings::__vmalloc_noprof(size as u64, flags.0) };
+
+            // Validate that we actually allocated the requested memory.
+            let dst = NonNull::new(dst.cast()).ok_or(AllocError)?;
+
+            if !src.is_null() {
+                // SAFETY: `src` is guaranteed to point to valid memory with a size of at least
+                // `old_size`; `dst` is guaranteed to point to valid memory with a size of at least
+                // `size`.
+                unsafe {
+                    core::ptr::copy_nonoverlapping(
+                        src,
+                        dst.as_ptr(),
+                        core::cmp::min(old_size, size),
+                    )
+                };
+
+                // SAFETY: `src` is guaranteed to be previously allocated with this `Allocator` or
+                // NULL.
+                unsafe { bindings::vfree(src.cast()) }
+            }
+
+            dst
+        };
+
+        Ok(NonNull::slice_from_raw_parts(dst, size))
+    }
+}
+
 #[global_allocator]
 static ALLOCATOR: Kmalloc = Kmalloc;
 
diff --git a/rust/kernel/alloc/allocator_test.rs b/rust/kernel/alloc/allocator_test.rs
index 3a0abe65491d..b2d7db492ba6 100644
--- a/rust/kernel/alloc/allocator_test.rs
+++ b/rust/kernel/alloc/allocator_test.rs
@@ -7,6 +7,7 @@
 use core::ptr::NonNull;
 
 pub struct Kmalloc;
+pub type Vmalloc = Kmalloc;
 
 unsafe impl Allocator for Kmalloc {
     unsafe fn realloc(
-- 
2.45.2

