Date: Tue, 17 Oct 2023 09:13:46 -0700
From: Boqun Feng <boqun.feng@...il.com>
To: Benno Lossin <benno.lossin@...ton.me>
Cc: Greg KH <gregkh@...uxfoundation.org>, Andrew Lunn <andrew@...n.ch>,
	FUJITA Tomonori <fujita.tomonori@...il.com>, netdev@...r.kernel.org,
	rust-for-linux@...r.kernel.org, miguel.ojeda.sandonis@...il.com,
	tmgross@...ch.edu, wedsonaf@...il.com
Subject: Re: [PATCH net-next v4 1/4] rust: core abstractions for network PHY
 drivers

On Tue, Oct 17, 2023 at 02:32:07PM +0000, Benno Lossin wrote:
> On 17.10.23 16:21, Greg KH wrote:
> > On Tue, Oct 17, 2023 at 02:04:33PM +0000, Benno Lossin wrote:
> >> On 17.10.23 14:38, Andrew Lunn wrote:
> >>>>> Because set_speed() updates the member in phy_device and read()
> >>>>> updates the object that phy_device points to?
> >>>>
> >>>> `set_speed` is entirely implemented on the Rust side and is not protected
> >>>> by a lock.
> >>>
> >>> With the current driver, all entry points into the driver are called
> >>> from the phylib core, and the core guarantees that the lock is
> >>> taken. So it should not matter if it's entirely implemented on the Rust
> >>> side; somewhere up the call stack, the lock was taken.
> >>
> >> Sure, that might be the case, but I am trying to guard against this future
> >> problem:
> >>
> >>       fn soft_reset(driver: &mut Driver) -> Result {
> >>           let driver = &*driver; // reborrow as a shared reference
> >>           thread::scope(|s| {
> >>               let thread_a = s.spawn(|| {
> >>                   for _ in 0..100_000_000 {
> >>                       driver.set_speed(10);
> >>                   }
> >>               });
> >>               let thread_b = s.spawn(|| {
> >>                   for _ in 0..100_000_000 {
> >>                       driver.set_speed(10);
> >>                   }
> >>               });
> >>               thread_a.join();
> >>               thread_b.join();
> >>           });
> >>           Ok(())
> >>       }
> >>
> >> This code spawns two new threads both of which can call `set_speed`,
> >> since it takes `&self`. But this leads to a data race, since those
> >> accesses are not serialized. I know that this is a very contrived
> >> example, but you never know when this will become reality, so we should
> >> do the right thing now and just use `&mut self`, since that is exactly
> >> what it is for.
> > 
> > Kernel code is written for the use cases of today; don't worry about
> > tomorrow. You can fix the issue tomorrow if you change something that
> > requires it.
> 
> The kind of coding style that (mis)-uses interior mutability is not
> something that we can change overnight. We should do it properly to
> begin with.
> 

Agreed!

Sure, the bottom line is that the kernel today works, but that's only
the bottom line, right? In other words, nothing prevents us from having
a more reasonable API with some future guards (we are not rushing
anything). In fact, the point of Rust is that when you write an
abstraction for anything, you should be clear about its semantics
(exclusive accesses, lock rules, etc.), so that those semantics can be
encoded in the type system (or in unsafe requirements); this is the
bottom line of using Rust. Of course, there could be cases that Rust
cannot describe correctly (or easily) with the type system, since the
kernel is a complicated beast. In that case, we can either 1) ask Rust
to improve or 2) make a sane workaround and move on. But there is no
excuse for not trying hard today if you're using Rust: if you don't
have a clear mental model of the abstraction you're writing, or cannot
describe it clearly, then you probably have bad Rust code, and bad Rust
code, just like bad C code, is what we try to avoid in the kernel.
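
To make that concrete, here is a minimal sketch of what "encoding the
semantics in the type system" can look like. The `Driver` and
`DeviceLockGuard` types are hypothetical, for illustration only, not
the phylib abstraction in this series:

    struct DeviceLockGuard; // stands in for "the device lock is held"

    struct Driver {
        speed: u32,
    }

    impl Driver {
        // "Requires exclusive access" is spelled `&mut self`: the
        // compiler will not let two concurrent callers both hold a
        // `&mut Driver`.
        fn set_speed(&mut self, speed: u32) {
            self.speed = speed;
        }

        // "Must be called with the device lock held" is spelled as a
        // parameter: callers have to present proof the lock is held.
        fn status(&self, _lock: &DeviceLockGuard) -> u32 {
            self.speed
        }
    }

    fn main() {
        let mut drv = Driver { speed: 0 };
        drv.set_speed(1000);
        // In real code the guard would come from actually taking the
        // lock; here it is just a stand-in.
        let lock = DeviceLockGuard;
        assert_eq!(drv.status(&lock), 1000);
    }

That way the rules live in the signatures and are checked by the
compiler, instead of living only in a comment.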

Also, countless times I have heard people (usually maintainers)
complain to someone, "no, you are using this (API) in the wrong way";
"not future-proof" seems to be a root of such problems ;-) and having
some sort of future guard benefits everyone.

> > And what "race" are you getting here?  You don't have threads in the
> > kernel :)
> 
> I chose threads, since I am a lot more familiar with those, but the
> kernel also has workqueues, which execute stuff concurrently (if I
> remember correctly). We also have patches for workqueue bindings, so
> they are not that far away.
> 
> > Also, if two things are setting the speed, wonderful, you get some sort
> > of value eventually; you have much bigger problems in your code, as you
> > shouldn't have been doing that in the first place.
> 
> This is not allowed in Rust, it is UB and will lead to bad things.
> 
> >> Note that we do not even have a way to create threads on the Rust side
> >> at the moment.
> > 
> > Which is a good thing :)
> > 
> >> But we should already be thinking about any possible code pattern.
> > 
> > Again, no, deal with what we have today, kernel code is NOT
> > future-proof, that's not how we write this stuff.
> 
> While I made my argument for future-proofing, I think that we
> should just be using the standard Rust stuff where it applies.
> 
> When you want to modify something, use `&mut T`; if not, then use
> `&T`. Only deviate from this if you have a good argument.
> 
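
(As an aside, a minimal sketch with a hypothetical `Driver`: once
`set_speed` takes `&mut self`, the contrived two-thread example quoted
earlier in the thread is simply rejected by the borrow checker, before
any locking discussion even starts.)

    use std::thread;

    struct Driver {
        speed: u32,
    }

    impl Driver {
        fn set_speed(&mut self, speed: u32) {
            self.speed = speed;
        }
    }

    fn soft_reset(driver: &mut Driver) {
        thread::scope(|s| {
            let a = s.spawn(|| driver.set_speed(10));
            // A second `s.spawn(|| driver.set_speed(10))` here does not
            // compile: `*driver` cannot be borrowed as mutable more
            // than once at a time.
            a.join().unwrap();
        });
    }

    fn main() {
        let mut driver = Driver { speed: 0 };
        soft_reset(&mut driver);
        assert_eq!(driver.speed, 10);
    }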

Right, so if the rule for C code is 1) working for the current users +
2) subsystem requirements, then for Rust code there is simply one extra
rule: be clear about the conceptual model of the API, and use Rust
abstractions correctly (mostly, that means API soundness). Again, if
you find that Rust doesn't work for your stuff, we are open to
discussion; quite frankly, we know such problems could exist and are
interested in learning from them (or throwing them at the Rust language
community ;-)), but ignoring Rust's rules is not a good start.
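
(To spell out what "API soundness" means in this context, here is a
minimal sketch with a hypothetical wrapper type, not the code from this
series: if a setter takes `&self` but writes through a raw pointer with
no locking behind it, safe code can produce a data race, so the API is
unsound; taking `&mut self`, or demanding a lock guard, makes the
signature match the real requirement.)

    struct PhyDevice {
        speed: u32,
    }

    // Hypothetical wrapper around a C-side object.
    // Invariant: `ptr` is valid, and nothing else touches the pointee
    // while the `Device` exists.
    struct Device {
        ptr: *mut PhyDevice,
    }

    impl Device {
        // If this took `&self` with no lock, two threads holding
        // `&Device` could race on the write below from safe code,
        // which would make the API unsound.
        fn set_speed(&mut self, speed: u32) {
            // SAFETY: `&mut self` guarantees exclusive access to the
            // pointee for the duration of the call (type invariant).
            unsafe { (*self.ptr).speed = speed }
        }
    }

    fn main() {
        let mut dev = PhyDevice { speed: 0 };
        {
            let mut wrapper = Device { ptr: &mut dev };
            wrapper.set_speed(100);
        }
        assert_eq!(dev.speed, 100);
    }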

Anyway, I just saw Tomo's v5, thanks!

Regards,
Boqun

> -- 
> Cheers,
> Benno
> 
