Message-ID: <f26a3e1a-7eb8-464e-9cbe-ebb8bdf69b20@proton.me>
Date: Tue, 17 Oct 2023 14:04:33 +0000
From: Benno Lossin <benno.lossin@...ton.me>
To: Andrew Lunn <andrew@...n.ch>
Cc: FUJITA Tomonori <fujita.tomonori@...il.com>, netdev@...r.kernel.org,
	rust-for-linux@...r.kernel.org, miguel.ojeda.sandonis@...il.com,
	tmgross@...ch.edu, boqun.feng@...il.com, wedsonaf@...il.com,
	greg@...ah.com
Subject: Re: [PATCH net-next v4 1/4] rust: core abstractions for network PHY drivers

On 17.10.23 14:38, Andrew Lunn wrote:
>>> Because set_speed() updates the member in phy_device and read()
>>> updates the object that phy_device points to?
>>
>> `set_speed` is entirely implemented on the Rust side and is not
>> protected by a lock.
>
> With the current driver, all entry points into the driver are called
> from the phylib core, and the core guarantees that the lock is
> taken. So it should not matter if it's entirely implemented on the
> Rust side; somewhere up the call stack, the lock was taken.

Sure, that might be the case. I am trying to guard against this future
problem:

    fn soft_reset(driver: &mut Driver) -> Result {
        let driver = driver;
        thread::scope(|s| {
            let thread_a = s.spawn(|| {
                for _ in 0..100_000_000 {
                    driver.set_speed(10);
                }
            });
            let thread_b = s.spawn(|| {
                for _ in 0..100_000_000 {
                    driver.set_speed(10);
                }
            });
            thread_a.join();
            thread_b.join();
        });
        Ok(())
    }

This code spawns two new threads, both of which can call `set_speed`,
since it takes `&self`. But this leads to a data race, since those
accesses are not serialized.

I know that this is a very contrived example, but you never know when
this will become reality, so we should do the right thing now and just
use `&mut self`, since that is exactly what it is for.

Note that we do not even have a way to create threads on the Rust side
at the moment. But we should already be thinking about any possible
code pattern.

>>>> What about these functions?
>>>> - resolve_aneg_linkmode
>>>> - genphy_soft_reset
>>>> - init_hw
>>>> - start_aneg
>>>> - genphy_read_status
>>>> - genphy_update_link
>>>> - genphy_read_lpa
>>>> - genphy_read_abilities
>>>
>>> As Andrew replied, all the functions update some member in phy_device.
>>
>> Do all of these functions lock the `bus->mdio_lock`?
>
> When accessing the hardware, yes.
>
> The basic architecture is that at the bottom we have an MDIO bus, and
> on top of that bus, we have a number of devices. The MDIO core will
> serialise access to the bus, so only one device on the bus can be
> accessed at once. The phylib core will serialise access to the PHY,
> but when there are multiple PHYs, the phylib core will allow parallel
> access to different PHYs.
>
> In summary, the core of each layer protects the drivers using that
> layer from multiple parallel accesses from above.

Thanks for this explanation, it really helps!

-- 
Cheers,
Benno