Message-Id: <DC5IF4AUVJP6.5B68OUOXQLCM@kernel.org>
Date: Mon, 18 Aug 2025 13:16:55 +0200
From: "Danilo Krummrich" <dakr@...nel.org>
To: "Alice Ryhl" <aliceryhl@...gle.com>
Cc: <akpm@...ux-foundation.org>, <ojeda@...nel.org>,
<alex.gaynor@...il.com>, <boqun.feng@...il.com>, <gary@...yguo.net>,
<bjorn3_gh@...tonmail.com>, <lossin@...nel.org>, <a.hindborg@...nel.org>,
<tmgross@...ch.edu>, <abdiel.janulgue@...il.com>, <acourbot@...dia.com>,
<jgg@...pe.ca>, <lyude@...hat.com>, <robin.murphy@....com>,
<daniel.almeida@...labora.com>, <rust-for-linux@...r.kernel.org>,
<linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/4] rust: scatterlist: Add type-state abstraction for
sg_table
On Mon Aug 18, 2025 at 11:52 AM CEST, Alice Ryhl wrote:
> On Fri, Aug 15, 2025 at 07:10:03PM +0200, Danilo Krummrich wrote:
>> +/// A single entry in a scatter-gather list.
>> +///
>> +/// An `SGEntry` represents a single, physically contiguous segment of memory that has been mapped
>> +/// for DMA.
>> +///
>> +/// Instances of this struct are obtained by iterating over an [`SGTable`]. Drivers do not create
>> +/// or own [`SGEntry`] objects directly.
>> +#[repr(transparent)]
>> +pub struct SGEntry(Opaque<bindings::scatterlist>);
>
> Send/Sync?
I think SGEntry doesn't strictly need them, but both impls would be valid.
For the other types, I simply forgot to implement Sync.
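If we add them, something along these lines should do (untested sketch,
assuming `SGEntry` stays a transparent wrapper around
`Opaque<bindings::scatterlist>` and only ever hands out the DMA address and
length):

// SAFETY: An `SGEntry` only exposes the DMA address and DMA length of the
// underlying `struct scatterlist`, which may be read from any thread.
unsafe impl Send for SGEntry {}

// SAFETY: `SGEntry` has no interior mutability reachable through `&SGEntry`,
// hence it is safe to share references to it between threads.
unsafe impl Sync for SGEntry {}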
>> + /// Returns the DMA address of this SG entry.
>> + ///
>> + /// This is the address that the device should use to access the memory segment.
>> + pub fn dma_address(&self) -> bindings::dma_addr_t {
>
> We might want a typedef on the Rust side for dma_addr_t, like we already
> have for the phys_addr_t/resource_size_t.
Yes, the idea was to add that in a subsequent patch; dma.rs already leaks
bindings::dma_addr_t and I didn't want to mix this cleanup into this series.
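Roughly, the follow-up would add a type alias in dma.rs and switch the
signatures over; e.g. (the alias name below is just a placeholder):

/// A DMA address of a device; corresponds to the C `dma_addr_t`.
pub type DmaAddress = bindings::dma_addr_t;

and then here:

    pub fn dma_address(&self) -> dma::DmaAddress {
        // SAFETY: `self.as_raw()` is a valid pointer to a `struct scatterlist`.
        unsafe { bindings::sg_dma_address(self.as_raw()) }
    }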
>> + // SAFETY: `self.as_raw()` is a valid pointer to a `struct scatterlist`.
>> + unsafe { bindings::sg_dma_address(self.as_raw()) }
>> + }
>> +
>> + /// Returns the length of this SG entry in bytes.
>> + pub fn dma_len(&self) -> u32 {
>
> Is u32 really the right length type?
Unfortunately, the C type is unsigned int, and SG entries larger than what
fits into a u32 probably don't make sense anyway.

Formally, bus addresses, and hence this size, can exceed size_t. However, it
obviously makes no sense for this to happen in practice, so size_t would be a
reasonable choice; resource_size_t would be reasonable too.
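If we go with size_t, a minimal sketch (assuming the C helper is reachable as
`bindings::sg_dma_len()`) would be:

    /// Returns the length of this SG entry in bytes.
    pub fn dma_len(&self) -> usize {
        // SAFETY: `self.as_raw()` is a valid pointer to a `struct scatterlist`.
        let len = unsafe { bindings::sg_dma_len(self.as_raw()) };

        // The C side stores the length as `unsigned int`, which always fits
        // into `usize` on the architectures we support.
        len as usize
    }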
>> +/// # Invariants
>> +///
>> +/// `sgt` is a valid pointer to a `struct sg_table` for the entire lifetime of an [`DmaMapSgt`].
>
> I think we probably want an invariant for why it's safe to call
> dma_unmap_sgtable in Drop.
I assume you're suggesting that the invariant should say that the SG table is
always mapped? And that we should add a safety requirement to DmaMapSgt::new()
that the caller must guarantee that the SG table is never unmapped other than by
dropping DmaMapSgt?
If so, that sounds reasonable.
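Concretely, I'd extend the invariants to something like (wording to be
polished):

/// # Invariants
///
/// - `sgt` is a valid pointer to a `struct sg_table` for the entire lifetime
///   of the [`DmaMapSgt`].
/// - The `struct sg_table` pointed to by `sgt` is mapped for DMA on `dev` with
///   direction `dir` for the entire lifetime of the [`DmaMapSgt`], i.e. it is
///   only ever unmapped by dropping the [`DmaMapSgt`].

plus a corresponding # Safety section on DmaMapSgt::new() requiring that the
caller never unmaps the table by any other means.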
>> +struct DmaMapSgt {
>> + sgt: NonNull<bindings::sg_table>,
>> + dev: ARef<Device>,
>> + dir: dma::DataDirection,
>> +}
<snip>
>> +impl RawSGTable {
>> + fn new(
>> + mut pages: KVec<*mut bindings::page>,
>> + size: usize,
>> + max_segment: u32,
>> + flags: alloc::Flags,
>> + ) -> impl PinInit<Self, Error> {
>> + try_pin_init!(Self {
>> + sgt <- Opaque::try_ffi_init(|slot: *mut bindings::sg_table| {
>> + // `sg_alloc_table_from_pages_segment()` expects at least one page, otherwise it
>> + // produces a NPE.
>> + if pages.is_empty() {
>> + return Err(EINVAL);
>> + }
>> +
>> + // SAFETY:
>> + // - `slot` is a valid pointer to uninitialized memory.
>> + // - As by the check above, `pages` is not empty.
>> + error::to_result(unsafe {
>> + bindings::sg_alloc_table_from_pages_segment(
>> + slot,
>> + pages.as_mut_ptr(),
>> + pages.len().try_into()?,
>
> The `pages` vector is dropped immediately after this call to
> sg_alloc_table_from_pages_segment. Is that ok?
Yes, it's only needed during sg_alloc_table_from_pages_segment().
> If it's ok, then I would change `pages` to `&[*mut page]` so that the
> caller can manage the allocation of the array.
We can immediately drop it after sg_alloc_table_from_pages_segment(), so why
do we want the caller to take care of the lifetime?
>> +impl<P> Owned<P>
>> +where
>> + for<'a> P: page::AsPageIter<Iter<'a> = VmallocPageIter<'a>> + 'static,
>
> If you specifically require the iterator type to be VmallocPageIter,
> then I would hard-code that in the trait instead of specifying it here.
I do not follow; hard-code it in which trait?
> But I think you just want `P: AsPageIter`.
Yeah, I thought for now it's probably good enough to require VmallocPageIter
and revisit once we get more implementers of AsPageIter, but I think we can
also do it right away.

I think we'd need a trait PageIterator that provides page_count(), since we
technically can't rely on Iterator::size_hint(). Though, in this case, I think
we can also just require AsPageIter::Iter: ExactSizeIterator?
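Sketching the latter (modulo the real method names and the item type of the
page iterator), it would just mean bounding the associated type:

pub trait AsPageIter {
    /// The iterator over the pages backing the object.
    ///
    /// Requiring `ExactSizeIterator` lets callers pre-allocate the page vector
    /// from `len()` without relying on `Iterator::size_hint()`.
    type Iter<'a>: ExactSizeIterator
    where
        Self: 'a;

    /// Returns an iterator over the backing pages.
    fn page_iter(&mut self) -> Self::Iter<'_>;
}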
>> +{
>> + fn new(
>> + dev: &Device<Bound>,
>> + mut pages: P,
>> + dir: dma::DataDirection,
>> + flags: alloc::Flags,
>> + ) -> Result<impl PinInit<Self, Error> + use<'_, P>> {
>
> We would probably want to move the logic into the initializer so that we
> don't have the double Result here.
That'd be nice, but I think it's not possible: we can't borrow from pages in
the initializer closure while, at the same time, storing pages with another
initializer, can we?

Either way, it's not that big a deal, I think, since this constructor is not
exposed to the outside world, which is also why it didn't bother me too much.
>> + let page_iter = pages.page_iter();
>> + let size = page_iter.size();
>> +
>> + let mut page_vec: KVec<*mut bindings::page> =
>> + KVec::with_capacity(page_iter.page_count(), flags)?;
>> +
>> + for page in page_iter {
>> + page_vec.push(page.as_ptr(), flags)?;
>> + }
>> +
>> + // `dma_max_mapping_size` returns `size_t`, but `sg_alloc_table_from_pages_segment()` takes
>> + // an `unsigned int`.
>> + let max_segment = {
>> + // SAFETY: `dev.as_raw()` is a valid pointer to a `struct device`.
>> + let size = unsafe { bindings::dma_max_mapping_size(dev.as_raw()) };
>> + if size == 0 {
>> + u32::MAX
>> + } else {
>> + size.min(u32::MAX as usize) as u32
>
> u32::try_from(size).unwrap_or(u32::MAX)
>
>> + }
>> + };
>> +
>> + Ok(try_pin_init!(&this in Self {
>> + sgt <- RawSGTable::new(page_vec, size, max_segment, flags),
>> + dma <- {
>> + // SAFETY: `this` is a valid pointer to uninitialized memory.
>> + let sgt = unsafe { &raw mut (*this.as_ptr()).sgt }.cast();
>> +
>> + // SAFETY: `sgt` is guaranteed to be non-null.
>> + let sgt = unsafe { NonNull::new_unchecked(sgt) };
>> +
>> + // SAFETY: It is guaranteed that the object returned by `DmaMapSgt::new` won't
>> + // out-live `sgt`.
>> + Devres::new(dev, unsafe { DmaMapSgt::new(sgt, dev, dir) })
>> + },
>> + _pages: pages,
>> + }))
>> + }
>> +}