Message-ID: <aPS0aTUUwDsXmHFN@google.com>
Date: Sun, 19 Oct 2025 09:50:33 +0000
From: Alice Ryhl <aliceryhl@...gle.com>
To: Danilo Krummrich <dakr@...nel.org>
Cc: gregkh@...uxfoundation.org, rafael@...nel.org, ojeda@...nel.org,
alex.gaynor@...il.com, boqun.feng@...il.com, gary@...yguo.net,
bjorn3_gh@...tonmail.com, lossin@...nel.org, a.hindborg@...nel.org,
tmgross@...ch.edu, mmaurer@...gle.com, rust-for-linux@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH 3/7] rust: debugfs: support for binary large objects
On Fri, Oct 17, 2025 at 04:37:48PM +0200, Danilo Krummrich wrote:
> On Fri Oct 17, 2025 at 2:59 PM CEST, Alice Ryhl wrote:
> > On Sat, Oct 04, 2025 at 12:26:40AM +0200, Danilo Krummrich wrote:
> >> Introduce support for read-only, write-only, and read-write binary files
> >> in Rust debugfs. This adds:
> >>
> >> - BinaryWriter and BinaryReader traits for writing to and reading from
> >> user slices in binary form.
> >> - New Dir methods: read_binary_file(), write_binary_file(),
> >> `read_write_binary_file`.
> >> - Corresponding FileOps implementations: BinaryReadFile,
> >> BinaryWriteFile, BinaryReadWriteFile.
> >>
> >> This allows kernel modules to expose arbitrary binary data through
> >> debugfs, with proper support for offsets and partial reads/writes.
> >>
> >> Signed-off-by: Danilo Krummrich <dakr@...nel.org>
> >
> >> +extern "C" fn blob_write<T: BinaryReader>(
> >> + file: *mut bindings::file,
> >> + buf: *const c_char,
> >> + count: usize,
> >> + ppos: *mut bindings::loff_t,
> >> +) -> isize {
> >> + // SAFETY:
> >> + // - `file` is a valid pointer to a `struct file`.
> >> + // - The type invariant of `FileOps` guarantees that `private_data` points to a valid `T`.
> >> + let this = unsafe { &*((*file).private_data.cast::<T>()) };
> >> +
> >> + // SAFETY: `ppos` is a valid `loff_t` pointer.
> >> + let pos = unsafe { &mut *ppos };
> >> +
> >> + let mut reader = UserSlice::new(UserPtr::from_ptr(buf.cast_mut().cast()), count).reader();
> >> +
> >> + let ret = || -> Result<isize> {
> >> + let offset = (*pos).try_into()?;
> >
> > So offsets larger than the buffer result in Ok(0) unless the offset
> > doesn't fit in a usize, in which case it's an error instead? I think we
> > should treat offsets that are too large in the same manner no matter
> > how large they are.
>
> The offset being larger than the buffer is fine, userspace has to keep reading
> until the kernel indicates that there are no more bytes left to read by
> returning zero.
>
> But if the offset is larger than a usize there isn't a chance this can ever be
> successful in the first place, hence I'd consider this an error.

I don't really agree with this. Obviously we have to return Ok(0) if the
position is equal to the buffer size. But for positions strictly larger
than the buffer size, I think either Ok(0) or an error is a reasonable
choice. Please, let's just be consistent about which one we return for
positions larger than the buffer size.
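
Something like this is what I mean by treating all out-of-range
positions the same (purely illustrative, plain Rust rather than the
kernel types, and not the code from the patch):

fn copy_at_offset(buf: &[u8], pos: u64, out: &mut [u8]) -> usize {
    // Any position at or past the end of the buffer, including ones
    // that don't fit in a usize, consistently reports EOF by copying
    // zero bytes.
    let offset = match usize::try_from(pos) {
        Ok(o) if o < buf.len() => o,
        _ => return 0,
    };
    // Copy as many bytes as fit, supporting partial reads at any offset.
    let n = out.len().min(buf.len() - offset);
    out[..n].copy_from_slice(&buf[offset..offset + n]);
    n
}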

Besides, such a read could succeed. Imagine a debugfs file whose
contents aren't stored in memory as a big byte array, but are fetched or
computed on the fly based on the offset. Such a file could be larger
than 4GB on a 32-bit machine no problem.
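
To make that concrete, here is a sketch with made-up names (not the
actual BinaryReader/BinaryWriter API from the patch): a file whose bytes
are a pure function of the offset needs no backing byte array, so any
u64 offset is meaningful even when usize is only 32 bits.

struct GeneratedFile;

impl GeneratedFile {
    // Hypothetical helper: fill `out` with bytes derived from the
    // absolute file position, so there is no in-memory buffer whose
    // length would bound the valid offsets.
    fn read_at(&self, pos: u64, out: &mut [u8]) -> usize {
        for (i, byte) in out.iter_mut().enumerate() {
            *byte = (pos.wrapping_add(i as u64) & 0xff) as u8;
        }
        out.len()
    }
}
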
Alice