Message-ID: <87mt98nwny.fsf@wdc.com>
Date:   Thu, 03 Nov 2022 11:38:42 +0100
From:   Andreas Hindborg <andreas.hindborg@....com>
To:     Björn Roy Baron <bjorn3_gh@...tonmail.com>
Cc:     Dennis Dai <dzy.0424thu@...il.com>,
        Miguel Ojeda <ojeda@...nel.org>,
        Alex Gaynor <alex.gaynor@...il.com>,
        Wedson Almeida Filho <wedsonaf@...il.com>,
        Boqun Feng <boqun.feng@...il.com>, Gary Guo <gary@...yguo.net>,
        rust-for-linux@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: rust nvme driver: potential sleep-in-atomic-context


Björn Roy Baron <bjorn3_gh@...tonmail.com> writes:

> On Thursday, November 3rd, 2022 at 07:12, Dennis Dai <dzy.0424thu@...il.com> wrote:
>
>
>> The rust nvme driver [1] (which is still pending to be merged into
>> mainline [2]) has a potential sleep-in-atomic-context bug.
>>
>> The potential buggy code is below
>>
>> // drivers/block/nvme.rs:192
>> dev.queues.lock().io.try_reserve(nr_io_queues as _)?;
>> // drivers/block/nvme.rs:227
>> dev.queues.lock().io.try_push(io_queue.clone())?;
>>
>> The queues field is wrapped in SpinLock, which means that we cannot
>> sleep (or indirectly call any function that may sleep) when the lock
>> is held.
>> However, the try_reserve function may indirectly call krealloc with the
>> sleepable flag GFP_KERNEL (that is the default behaviour of the global
>> Rust allocator).
>> The case is similar for try_push.
>>
>> I wonder if the bug could be confirmed.
>>
>>
>> [1] https://github.com/metaspace/rust-linux/commit/d88c3744d6cbdf11767e08bad56cbfb67c4c96d0
>> [2] https://lore.kernel.org/lkml/202210010816.1317F2C@keescook/
>
> setup_io_queues is only called by dev_add, which in turn is only called by
> NvmeDevice::probe. This last function is responsible for creating the
> &Ref<DeviceData> that ends up being passed to setup_io_queues. It doesn't seem
> like any reference is passed to another thread before setup_io_queues runs. As
> such, no other thread can block on the current thread due to it holding the
> lock. As far as I understand, this means that sleeping while the lock is held
> is harmless. I think it would be possible to replace the &Ref<DeviceData>
> argument with a Pin<&mut DeviceData> argument by moving the dev_add call to
> before Ref::<DeviceData>::from(data). This would make it clear that only the
> current thread holds a reference, and would also allow using a method like
> get_mut [1] to get a reference to the protected data without actually locking
> the spinlock, as it is statically enforced that nobody else can hold the lock.
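The pattern Dennis flags, and one way to keep fallible allocation out of the critical section, can be sketched in ordinary userland Rust. This is an analogy only: std::sync::Mutex stands in for the kernel SpinLock, and the function name and element type are illustrative, not the driver's actual code. It assumes a single owner of the lock while it runs (as is the case during probe in this discussion), since a concurrent push between the two lock() calls would be lost.

```rust
use std::collections::TryReserveError;
use std::mem;
use std::sync::Mutex;

// Hypothetical sketch: grow the vector with the lock released, so the
// allocation (the part that could sleep in the kernel) never happens
// inside the critical section.
fn push_without_allocating_under_lock(
    queues: &Mutex<Vec<u32>>,
    item: u32,
) -> Result<(), TryReserveError> {
    // Take the vector out under the lock: a pointer swap, no allocation.
    let mut v = mem::take(&mut *queues.lock().unwrap());
    // Fallible allocation happens outside the critical section.
    v.try_reserve(1)?;
    v.push(item);
    // Put it back; again only a pointer swap while the lock is held.
    *queues.lock().unwrap() = v;
    Ok(())
}
```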

I think you are right. The lock is just there to allow interior
mutability of the queue arrays. I could try to shuffle stuff around and
move queue setup before converting `data` to a Ref. That should be fine
as far as I can tell.
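The get_mut idea can also be sketched in userland Rust. Again an analogy: std::sync::Mutex stands in for the kernel SpinLock, and the struct and function names are illustrative, not the driver's actual types. With exclusive &mut access to the structure that owns the lock, Mutex::get_mut hands out the protected data without locking at all, because the borrow checker already guarantees nobody else can hold the lock.

```rust
use std::collections::TryReserveError;
use std::sync::Mutex;

// Illustrative stand-in for the driver's device data.
struct DeviceData {
    queues: Mutex<Vec<u32>>,
}

fn setup_io_queues(data: &mut DeviceData, nr_io_queues: usize) -> Result<(), TryReserveError> {
    // No lock is taken: `data` is uniquely borrowed, so get_mut is safe
    // and the lock cannot be held by anyone else.
    let io = data.queues.get_mut().unwrap();
    io.try_reserve(nr_io_queues)?;
    for q in 0..nr_io_queues as u32 {
        io.push(q);
    }
    Ok(())
}
```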

BR Andreas
