Date:   Thu, 03 Nov 2022 09:38:10 +0000
From:   Björn Roy Baron <bjorn3_gh@...tonmail.com>
To:     Dennis Dai <dzy.0424thu@...il.com>
Cc:     Miguel Ojeda <ojeda@...nel.org>,
        Alex Gaynor <alex.gaynor@...il.com>,
        Wedson Almeida Filho <wedsonaf@...gle.com>,
        Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
        rust-for-linux@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: rust nvme driver: potential sleep-in-atomic-context

On Thursday, November 3rd, 2022 at 07:12, Dennis Dai <dzy.0424thu@...il.com> wrote:

> The rust nvme driver [1] (which is still pending merge into
> mainline [2]) has a potential sleep-in-atomic-context bug.
> 
> The potential buggy code is below
> 
> // drivers/block/nvme.rs:192
> dev.queues.lock().io.try_reserve(nr_io_queues as _)?;
> // drivers/block/nvme.rs:227
> dev.queues.lock().io.try_push(io_queue.clone())?;
> 
> The queues field is wrapped in SpinLock, which means that we cannot
> sleep (or indirectly call any function that may sleep) when the lock
> is held.
> However, the try_reserve function may indirectly call krealloc with the
> sleepable flag GFP_KERNEL (that's the default behaviour of the global
> rust allocator).
> The case is similar for try_push.
> 
> I wonder if the bug could be confirmed.
> 
> 
> [1] https://github.com/metaspace/rust-linux/commit/d88c3744d6cbdf11767e08bad56cbfb67c4c96d0
> [2] https://lore.kernel.org/lkml/202210010816.1317F2C@keescook/
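
That allocation path is real: as far as I can tell, every allocation that
reaches the kernel crate's global allocator, including the one behind
try_reserve, is forwarded to krealloc with GFP_KERNEL. Roughly (a simplified
sketch of rust/kernel/allocator.rs, with `bindings` being the generated C
bindings; details elided):

    use core::alloc::{GlobalAlloc, Layout};
    use core::ptr;

    struct KernelAllocator;

    unsafe impl GlobalAlloc for KernelAllocator {
        unsafe fn alloc(&self, layout: Layout) -> *mut u8 {
            // GFP_KERNEL allocations are allowed to sleep, which is what
            // makes calling this (indirectly) under a spinlock suspect.
            unsafe {
                bindings::krealloc(ptr::null(), layout.size(), bindings::GFP_KERNEL)
                    as *mut u8
            }
        }

        unsafe fn dealloc(&self, ptr: *mut u8, _layout: Layout) {
            unsafe { bindings::kfree(ptr as *const core::ffi::c_void) }
        }
    }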

setup_io_queues is only called by dev_add, which in turn is only called by
NvmeDevice::probe. This last function is responsible for creating the
&Ref<DeviceData> that ends up being passed to setup_io_queues. It doesn't
seem like any reference is passed to another thread between the creation of
the Ref<DeviceData> and the call to setup_io_queues. As such, no other
thread can block on the current thread due to it holding the lock. As far
as I understand, this means that sleeping while the lock is held is
harmless here.

I think it would be possible to replace the &Ref<DeviceData> argument with
a Pin<&mut DeviceData> argument by moving the dev_add call to before
Ref::<DeviceData>::from(data). This would make it clear that only the
current thread holds a reference, and would also allow using a method like
get_mut [1] to get a reference to the protected data without actually
locking the spinlock, as it is statically enforced that nobody else can
hold the lock. It seems that get_mut is missing from all of the locks
offered in the kernel crate; I opened an issue for this. [2]
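
To make the get_mut suggestion concrete, here is the userspace shape of it
with std's Mutex from [1] (the type and field names are placeholders, not
the driver's actual ones):

    use std::collections::TryReserveError;
    use std::sync::Mutex;

    struct DeviceData {
        io_queues: Vec<u32>, // stand-in for the real queue type
    }

    fn setup_io_queues(
        data: &mut Mutex<DeviceData>,
        nr_io_queues: usize,
    ) -> Result<(), TryReserveError> {
        // `get_mut` yields `&mut DeviceData` without taking the lock: the
        // exclusive borrow statically guarantees that no other thread can
        // even reach the lock, so the potentially sleeping allocation in
        // `try_reserve` cannot make anyone spin waiting for us.
        data.get_mut().unwrap().io_queues.try_reserve(nr_io_queues)?;
        Ok(())
    }

In the kernel, the argument would presumably be Pin<&mut DeviceData>, since
the kernel crate's locks have to be pinned, but the borrow-checker
reasoning is the same.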

[1]: https://doc.rust-lang.org/stable/std/sync/struct.Mutex.html#method.get_mut
[2]: https://github.com/Rust-for-Linux/linux/issues/924

Cheers,
Björn
