Message-Id: <6CF29D3D-C930-4274-9BAC-365C0F32DF56@collabora.com>
Date: Mon, 28 Oct 2024 12:38:45 -0300
From: Daniel Almeida <daniel.almeida@...labora.com>
To: Abdiel Janulgue <abdiel.janulgue@...il.com>
Cc: rust-for-linux@...r.kernel.org,
a.hindborg@...nel.org,
linux-kernel@...r.kernel.org,
dakr@...hat.com,
airlied@...hat.com,
miguel.ojeda.sandonis@...il.com,
wedsonaf@...il.com,
Andreas Hindborg <a.hindborg@...sung.com>
Subject: Re: [PATCH 2/2] rust: add dma coherent allocator abstraction.
Hi Abdiel,
> On 23 Oct 2024, at 08:32, Abdiel Janulgue <abdiel.janulgue@...il.com> wrote:
>
> Add a simple dma coherent allocator rust abstraction which was based on
> Andreas Hindborg's dma abstractions from the rnvme driver.
>
> This version:
> - Does not introduce the unused dma pool functionality for now.
> - Represents the internal CPU buffer as a slice instead of using raw
> pointer reads and writes.
This patch is not a v2, so was anybody actually against using a raw pointer at some point?
> - Ensures both 32 and 64-bit DMA addressing works.
> - Make use of Result error-handling instead of Some.
>
> Co-developed-by: Wedson Almeida Filho <wedsonaf@...il.com>
> Signed-off-by: Wedson Almeida Filho <wedsonaf@...il.com>
> Co-developed-by: Andreas Hindborg <a.hindborg@...sung.com>
> Signed-off-by: Andreas Hindborg <a.hindborg@...sung.com>
> Signed-off-by: Abdiel Janulgue <abdiel.janulgue@...il.com>
> ---
> rust/kernel/dma.rs | 153 +++++++++++++++++++++++++++++++++++++++++++++
> rust/kernel/lib.rs | 1 +
> 2 files changed, 154 insertions(+)
> create mode 100644 rust/kernel/dma.rs
>
> diff --git a/rust/kernel/dma.rs b/rust/kernel/dma.rs
> new file mode 100644
> index 000000000000..8390b3a4e8aa
> --- /dev/null
> +++ b/rust/kernel/dma.rs
> @@ -0,0 +1,153 @@
> +// SPDX-License-Identifier: GPL-2.0
> +
> +//! Direct memory access (DMA).
> +//!
> +//! C header: [`include/linux/dma-mapping.h`](srctree/include/linux/dma-mapping.h)
> +
> +use crate::{
> + bindings,
> + device::Device,
> + error::code::*,
> + error::Result,
> + types::ARef,
> +};
> +
> +/// Abstraction of dma_alloc_coherent
> +///
> +/// # Invariants
> +///
> +/// For the lifetime of an instance of CoherentAllocation:
> +/// 1. The cpu address pointer is valid and is accessed with an index bounded within count.
> +/// 2. We hold a reference to the device.
> +pub struct CoherentAllocation<T: 'static> {
> + dev: ARef<Device>,
> + dma_handle: bindings::dma_addr_t,
> + count: usize,
> + cpu_addr: &'static mut [T],
> +}
Not sure why there's `'static` here. The lifetime of `cpu_addr` is the lifetime of the object.
This is why keeping a pointer and building the slice as needed is actually a better approach, IMHO.
That will correctly express the lifetime we want to enforce, i.e.:
```
pub fn cpu(&'a mut self) -> &'a mut [T];
```
Where 'a is automatically filled in, of course.
Also, naming a slice as `cpu_addr` doesn’t sound very good, to be honest.
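Something along these lines, i.e. store the raw pointer plus the element count and only materialize the slice in an accessor, so the borrow is tied to `self`. This is just a rough sketch, and `cpu()` is only a suggested name:
```
pub struct CoherentAllocation<T> {
    dev: ARef<Device>,
    dma_handle: bindings::dma_addr_t,
    count: usize,
    cpu_addr: *mut T,
}

impl<T> CoherentAllocation<T> {
    /// Returns the CPU-side buffer as a slice whose borrow is tied to `self`.
    pub fn cpu(&mut self) -> &mut [T] {
        // SAFETY: per the type invariants, `cpu_addr` points to `count`
        // initialized elements of `T` for as long as `self` is alive.
        unsafe { core::slice::from_raw_parts_mut(self.cpu_addr, self.count) }
    }
}
```
That also keeps the unsafe slice construction in a single place.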
> +
> +impl<T> CoherentAllocation<T> {
> + /// Allocates a region of `size_of::<T> * count` of consistent memory.
> + ///
> + /// Returns a CoherentAllocation object which contains a pointer to the allocated region
> + /// (in the processor's virtual address space) and the device address which can be
> + /// given to the device as the DMA address base of the region. The region is released once
> + /// [`CoherentAllocation`] is dropped.
> + ///
> + /// # Examples
> + ///
> + /// ```
> + /// use kernel::device::Device;
> + /// use kernel::dma::CoherentAllocation;
> + ///
> + /// # fn dox(dev: &Device) -> Result<()> {
> + /// let c: CoherentAllocation<u64> = CoherentAllocation::alloc_coherent(dev, 4, GFP_KERNEL)?;
Have you considered ZSTs? What happens if someone writes:
```
let c: CoherentAllocation<()> = …
```
This doesn’t really make sense and should be forbidden.
> + /// # Ok(()) }
> + /// ```
> + pub fn alloc_coherent(
> + dev: &Device,
> + count: usize,
> + flags: kernel::alloc::Flags,
> + ) -> Result<CoherentAllocation<T>> {
> + let t_size = core::mem::size_of::<T>();
> + let size = count.checked_mul(t_size).ok_or(EOVERFLOW)?;
> + let mut dma_handle = 0;
> + // SAFETY: device pointer is guaranteed as valid by invariant on `Device`.
> + // We ensure that we catch the failure on this function and throw an ENOMEM
> + let ret = unsafe {
> + bindings::dma_alloc_attrs(
> + dev.as_raw(),
> + size,
> + &mut dma_handle, flags.as_raw(),
> + 0,
> + )
> + };
> + if ret.is_null() {
> + return Err(ENOMEM)
> + }
I assume that ZSTs will simply return ENOMEM per the check above, but that's not quite
right either. The API should prevent this by design instead of returning an `Error`.
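For instance, a build-time check at the top of `alloc_coherent()` would turn this into a compile error instead of a run-time ENOMEM. Just a sketch on top of this patch (untested), using the kernel's `build_assert!`:
```
// At the start of alloc_coherent(), before computing `size`:
// reject zero-sized types at build time; size_of::<T>() is a
// compile-time constant, so the check compiles away for any sane T.
build_assert!(core::mem::size_of::<T>() != 0);
```
A "non-zero size" bound can't really be expressed in the type system today, so a build-time assert is probably the closest we can get.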
> +
> + Ok(Self {
> + dev: dev.into(),
> + dma_handle,
> + count,
> + // SAFETY: The raw buffer and size is valid from the checks we've made above.
> + cpu_addr: unsafe { core::slice::from_raw_parts_mut(ret as _, size) },
> + })
> + }
> +
> + /// Reads a value on a location specified by index.
> + pub fn read(&self, index: usize) -> Result<T>
> + where
> + T: Copy
> + {
> + if let Some(val) = self.cpu_addr.get(index) {
> + Ok(*val)
> + } else {
> + Err(EINVAL)
> + }
> + }
> +
> + /// Write a value on the memory location specified by index.
> + pub fn write(&mut self, index: usize, value: &T) -> Result
> + where
> + T: Copy,
> + {
> + if let Some(elem) = self.cpu_addr.get_mut(index) {
> + *elem = *value;
> + Ok(())
> + } else {
> + Err(EINVAL)
> + }
> + }
> +
> + /// Performs a read and then a write of a value on a location specified by index.
> + pub fn read_write(&mut self, index: usize, value: &T) -> Result<T>
> + where
> + T: Copy,
> + {
> + if let Some(elem) = self.cpu_addr.get_mut(index) {
> + let val = *elem;
> + *elem = *value;
> + Ok(val)
> + } else {
> + Err(EINVAL)
> + }
> + }
> +
> + /// Returns the base address to the allocated region and the dma handle.
> + /// Caller takes ownership of returned resources.
> + pub fn into_parts(self) -> (usize, bindings::dma_addr_t) {
> + let ret = (self.cpu_addr.as_mut_ptr() as _, self.dma_handle);
> + core::mem::forget(self);
> + ret
> + }
> +
> + /// Returns the base address to the allocated region in the CPU's virtual address space.
> + pub fn start_ptr(&self) -> *const T {
> + self.cpu_addr.as_ptr() as _
> + }
> +
> + /// Returns the base address to the allocated region in the CPU's virtual address space as
> + /// a mutable pointer.
> + pub fn start_ptr_mut(&mut self) -> *mut T {
> + self.cpu_addr.as_mut_ptr() as _
> + }
> +
> + /// Returns a DMA handle which may given to the device as the DMA address base of
> + /// the region.
> + pub fn dma_handle(&self) -> bindings::dma_addr_t {
> + self.dma_handle
> + }
> +}
> +
> +impl<T> Drop for CoherentAllocation<T> {
> + fn drop(&mut self) {
> + let size = self.count * core::mem::size_of::<T>();
> +
> + // SAFETY: the device, cpu address, and the dma handle is valid due to the
> + // type invariants on `CoherentAllocation`.
> + unsafe { bindings::dma_free_attrs(self.dev.as_raw(), size,
> + self.cpu_addr.as_mut_ptr() as _,
> + self.dma_handle, 0) }
> + }
> +}
> diff --git a/rust/kernel/lib.rs b/rust/kernel/lib.rs
> index b62451f64f6e..b713c92eb1ef 100644
> --- a/rust/kernel/lib.rs
> +++ b/rust/kernel/lib.rs
> @@ -32,6 +32,7 @@
> pub mod block;
> mod build_assert;
> pub mod device;
> +pub mod dma;
> pub mod error;
> #[cfg(CONFIG_RUST_FW_LOADER_ABSTRACTIONS)]
> pub mod firmware;
> --
> 2.43.0
>
>
Everything else looks good to me!
— Daniel