Message-ID: <20250109133057.243751-3-daniel.almeida@collabora.com>
Date: Thu,  9 Jan 2025 10:30:54 -0300
From: Daniel Almeida <daniel.almeida@...labora.com>
To: alex.gaynor@...il.com,
	boqun.feng@...il.com,
	gary@...yguo.net,
	bjorn3_gh@...tonmail.com,
	benno.lossin@...ton.me,
	a.hindborg@...nel.org,
	aliceryhl@...gle.com,
	tmgross@...ch.edu,
	gregkh@...uxfoundation.org,
	rafael@...nel.org,
	dakr@...nel.org,
	boris.brezillon@...labora.com
Cc: Daniel Almeida <daniel.almeida@...labora.com>,
	rust-for-linux@...r.kernel.org,
	linux-kernel@...r.kernel.org
Subject: [PATCH v4 2/3] rust: io: mem: add a generic iomem abstraction

Add a generic iomem abstraction to safely read and write ioremapped
regions.

Reads and writes are done through the Io accessors backed by IoRaw, and are
thus checked either at compile time, if the size of the region is known at
that point, or at runtime otherwise.
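
As a minimal sketch of the two cases, assuming IoMem values obtained from a
suitable constructor and the read32()/try_read32() accessors already
provided by Io (the read_id()/try_read_id() helpers below are only for
illustration):

    use kernel::io::mem::IoMem;
    use kernel::prelude::*;

    // SIZE known at compile time: the access below is validated with
    // build_assert(), so an out-of-bounds offset fails the build.
    fn read_id(regs: &IoMem<0x100>) -> u32 {
        regs.read32(0x0)
    }

    // SIZE left at its default of 0: the bounds are only known at
    // runtime, so the fallible accessor must be used.
    fn try_read_id(regs: &IoMem) -> Result<u32> {
        regs.try_read32(0x0)
    }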

Non-exclusive access to the underlying memory region is made possible to
cater to cases where overlapping regions are unavoidable.
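
Within the kernel crate, where the pub(crate) constructor is reachable, this
boils down to the `exclusive` flag passed to IoMem::new(). A minimal sketch,
with hypothetical map_regs()/map_shared_regs() helpers; `res` is assumed to
be a valid Resource that outlives the returned mapping, and in practice the
IoMem is expected to be wrapped in a Devres instance:

    use kernel::io::mem::IoMem;
    use kernel::io::resource::Resource;
    use kernel::prelude::*;

    // Exclusive mapping: the range is additionally claimed with
    // request_mem_region(), so a second, overlapping user gets EBUSY.
    fn map_regs(res: &Resource) -> Result<IoMem> {
        // SAFETY: `res` is valid for the lifetime of the returned `IoMem`.
        unsafe { IoMem::new(res, true) }
    }

    // Non-exclusive mapping: only ioremap() is performed, allowing
    // deliberately overlapping regions.
    fn map_shared_regs(res: &Resource) -> Result<IoMem> {
        // SAFETY: as above.
        unsafe { IoMem::new(res, false) }
    }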

Signed-off-by: Daniel Almeida <daniel.almeida@...labora.com>
---
 rust/helpers/io.c     |  10 ++++
 rust/kernel/io.rs     |   1 +
 rust/kernel/io/mem.rs | 108 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 119 insertions(+)
 create mode 100644 rust/kernel/io/mem.rs

diff --git a/rust/helpers/io.c b/rust/helpers/io.c
index 3cb47bd01942..cb10060c08ae 100644
--- a/rust/helpers/io.c
+++ b/rust/helpers/io.c
@@ -106,3 +106,13 @@ resource_size_t rust_helper_resource_size(struct resource *res)
 	return resource_size(res);
 }
 
+struct resource *rust_helper_request_mem_region(resource_size_t start, resource_size_t n,
+				    const char *name)
+{
+	return request_mem_region(start, n, name);
+}
+
+void rust_helper_release_mem_region(resource_size_t start, resource_size_t n)
+{
+	release_mem_region(start, n);
+}
diff --git a/rust/kernel/io.rs b/rust/kernel/io.rs
index 566d8b177e01..9ce3482b5ecd 100644
--- a/rust/kernel/io.rs
+++ b/rust/kernel/io.rs
@@ -7,6 +7,7 @@
 use crate::error::{code::EINVAL, Result};
 use crate::{bindings, build_assert};
 
+pub mod mem;
 pub mod resource;
 
 /// Raw representation of an MMIO region.
diff --git a/rust/kernel/io/mem.rs b/rust/kernel/io/mem.rs
new file mode 100644
index 000000000000..f2147db715bf
--- /dev/null
+++ b/rust/kernel/io/mem.rs
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+
+//! Generic memory-mapped IO.
+
+use core::ops::Deref;
+
+use crate::io::resource::Resource;
+use crate::io::Io;
+use crate::io::IoRaw;
+use crate::prelude::*;
+
+/// A generic memory-mapped IO region.
+///
+/// Accesses to the underlying region are checked either at compile time, if the
+/// region's size is known at that point, or at runtime otherwise.
+///
+/// Whether `IoMem` represents exclusive access to the underlying memory
+/// region is determined by the caller at creation time, as overlapping access
+/// may be needed in some cases.
+///
+/// # Invariants
+///
+/// `IoMem` always holds an `IoRaw` instance that holds a valid pointer to the
+/// start of the I/O memory mapped region and its size.
+pub struct IoMem<const SIZE: usize = 0> {
+    io: IoRaw<SIZE>,
+    res_start: u64,
+    exclusive: bool,
+}
+
+impl<const SIZE: usize> IoMem<SIZE> {
+    /// Creates a new `IoMem` instance.
+    ///
+    /// `exclusive` determines whether the memory region is requested exclusively.
+    ///
+    /// # Safety
+    ///
+    /// The caller must ensure that the underlying resource remains valid
+    /// throughout the `IoMem`'s lifetime. This is usually done by wrapping the
+    /// `IoMem` in a `Devres` instance, which will properly revoke the access
+    /// when the device is unbound from the matched driver.
+    pub(crate) unsafe fn new(resource: &Resource, exclusive: bool) -> Result<Self> {
+        let size = resource.size();
+        if size == 0 {
+            return Err(ENOMEM);
+        }
+
+        let res_start = resource.start();
+
+        if exclusive {
+            // SAFETY:
+            // - `res_start` and `size` are read from a presumably valid `struct resource`.
+            // - `size` is known not to be zero at this point.
+            // - `resource.name()` returns a valid C string.
+            let mem_region = unsafe {
+                bindings::request_mem_region(res_start, size, resource.name().as_char_ptr())
+            };
+
+            if mem_region.is_null() {
+                return Err(EBUSY);
+            }
+        }
+
+        // SAFETY:
+        // - `res_start` and `size` are read from a presumably valid `struct resource`.
+        // - `size` is known not to be zero at this point.
+        let addr = unsafe { bindings::ioremap(res_start, size as usize) };
+        if addr.is_null() {
+            if exclusive {
+                // SAFETY:
+                // - `res_start` and `size` are read from a presumably valid `struct resource`.
+                // - `size` is the same as the one passed to `request_mem_region`.
+                unsafe { bindings::release_mem_region(res_start, size) };
+            }
+            return Err(ENOMEM);
+        }
+
+        let io = IoRaw::new(addr as usize, size as usize)?;
+
+        Ok(IoMem {
+            io,
+            res_start,
+            exclusive,
+        })
+    }
+}
+
+impl<const SIZE: usize> Drop for IoMem<SIZE> {
+    fn drop(&mut self) {
+        if self.exclusive {
+            // SAFETY: `res_start` and `io.maxsize()` were the values passed to
+            // `request_mem_region`.
+            unsafe { bindings::release_mem_region(self.res_start, self.io.maxsize() as u64) }
+        }
+
+        // SAFETY: Safe as by the invariant of `IoMem`.
+        unsafe { bindings::iounmap(self.io.addr() as *mut core::ffi::c_void) }
+    }
+}
+
+impl<const SIZE: usize> Deref for IoMem<SIZE> {
+    type Target = Io<SIZE>;
+
+    fn deref(&self) -> &Self::Target {
+        // SAFETY: Safe as by the invariant of `IoMem`.
+        unsafe { Io::from_raw(&self.io) }
+    }
+}
-- 
2.47.1

