Date:   Wed, 18 May 2022 06:56:18 -0600
From:   Jens Axboe <axboe@...nel.dk>
To:     Lee Jones <lee.jones@...aro.org>
Cc:     Pavel Begunkov <asml.silence@...il.com>, io-uring@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [REPORT] Use-after-free Read in __fdget_raw in v5.10.y

On 5/18/22 6:54 AM, Jens Axboe wrote:
> On 5/18/22 6:52 AM, Jens Axboe wrote:
>> On 5/18/22 6:50 AM, Lee Jones wrote:
>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>
>>>> On 5/17/22 7:00 AM, Lee Jones wrote:
>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>
>>>>>> On 5/17/22 6:36 AM, Lee Jones wrote:
>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>>>
>>>>>>>> On 5/17/22 6:24 AM, Lee Jones wrote:
>>>>>>>>> On Tue, 17 May 2022, Jens Axboe wrote:
>>>>>>>>>
>>>>>>>>>> On 5/17/22 5:41 AM, Lee Jones wrote:
>>>>>>>>>>> Good afternoon Jens, Pavel, et al.,
>>>>>>>>>>>
>>>>>>>>>>> Not sure if you are presently aware, but there appears to be a
>>>>>>>>>>> use-after-free issue affecting the io_uring worker driver (fs/io-wq.c)
>>>>>>>>>>> in Stable v5.10.y.
>>>>>>>>>>>
>>>>>>>>>>> The full syzbot report can be seen below [0].
>>>>>>>>>>>
>>>>>>>>>>> The C reproducer has been placed below that [1].
>>>>>>>>>>>
>>>>>>>>>>> I had great success running this reproducer in an infinite loop.
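As a concrete way to do that, assuming the reproducer from [1] is saved as
repro.c and compiled to a hypothetical ./repro binary:

    $ gcc -pthread -o repro repro.c
    $ while true; do ./repro; done

On an affected v5.10.y kernel this loop eventually triggers the
use-after-free report quoted in [0].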
>>>>>>>>>>>
>>>>>>>>>>> My colleague reverse-bisected the fixing commit to:
>>>>>>>>>>>
>>>>>>>>>>>   commit fb3a1f6c745ccd896afadf6e2d6f073e871d38ba
>>>>>>>>>>>   Author: Jens Axboe <axboe@...nel.dk>
>>>>>>>>>>>   Date:   Fri Feb 26 09:47:20 2021 -0700
>>>>>>>>>>>
>>>>>>>>>>>        io-wq: have manager wait for all workers to exit
>>>>>>>>>>>
>>>>>>>>>>>        Instead of having to wait separately on workers and manager, just have
>>>>>>>>>>>        the manager wait on the workers. We use an atomic_t for the reference
>>>>>>>>>>>        here, as we need to start at 0 and allow increment from that. Since the
>>>>>>>>>>>        number of workers is naturally capped by the allowed nr of processes,
>>>>>>>>>>>        and that uses an int, there is no risk of overflow.
>>>>>>>>>>>
>>>>>>>>>>>        Signed-off-by: Jens Axboe <axboe@...nel.dk>
>>>>>>>>>>>
>>>>>>>>>>>     fs/io-wq.c | 30 ++++++++++++++++++++++--------
>>>>>>>>>>>     1 file changed, 22 insertions(+), 8 deletions(-)
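As an aside, here is a minimal user-space sketch of the pattern that commit
message describes (not the actual fs/io-wq.c code; all names are
illustrative): workers hold a count that starts at 0, and the manager waits
for it to drain before any shared state may be freed.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NR_WORKERS 4

static atomic_int worker_refs;          /* starts at 0, as in the commit */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t refs_drained = PTHREAD_COND_INITIALIZER;

static void *worker(void *arg)
{
	(void)arg;
	/* ... the worker's actual work would happen here ... */
	pthread_mutex_lock(&lock);
	if (atomic_fetch_sub(&worker_refs, 1) == 1)
		pthread_cond_signal(&refs_drained);  /* last worker out */
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t tid[NR_WORKERS];

	for (int i = 0; i < NR_WORKERS; i++) {
		atomic_fetch_add(&worker_refs, 1);   /* one ref per worker */
		pthread_create(&tid[i], NULL, worker, NULL);
	}

	/* The manager waits on the workers directly, rather than the
	 * workers and manager being waited on separately. */
	pthread_mutex_lock(&lock);
	while (atomic_load(&worker_refs) != 0)
		pthread_cond_wait(&refs_drained, &lock);
	pthread_mutex_unlock(&lock);

	for (int i = 0; i < NR_WORKERS; i++)
		pthread_join(tid[i], NULL);

	puts("all workers exited; shared state is now safe to free");
	return 0;
}

The in-kernel fix uses an atomic_t in the same role, as the message notes;
the point is that teardown blocks until the count drains to zero, so no
worker can touch freed memory.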
>>>>>>>>>>
>>>>>>>>>> Does this fix it:
>>>>>>>>>>
>>>>>>>>>> commit 886d0137f104a440d9dfa1d16efc1db06c9a2c02
>>>>>>>>>> Author: Jens Axboe <axboe@...nel.dk>
>>>>>>>>>> Date:   Fri Mar 5 12:59:30 2021 -0700
>>>>>>>>>>
>>>>>>>>>>     io-wq: fix race in freeing 'wq' and worker access
>>>>>>>>>>
>>>>>>>>>> Looks like it didn't make it into 5.10-stable, but we can certainly
>>>>>>>>>> rectify that.
>>>>>>>>>
>>>>>>>>> Thanks for your quick response Jens.
>>>>>>>>>
>>>>>>>>> This patch doesn't apply cleanly to v5.10.y.
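For anyone following along, the failed attempt looks like this in a stable
checkout (the branch name is arbitrary; v5.10.116 is the tree used later in
this thread):

    $ git checkout -b backport-test v5.10.116
    $ git cherry-pick -x 886d0137f104

which stops with conflicts rather than applying cleanly.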
>>>>>>>>
>>>>>>>> This is probably why it never made it into 5.10-stable :-/
>>>>>>>
>>>>>>> Right.  It doesn't apply at all unfortunately.
>>>>>>>
>>>>>>>>> I'll have a go at back-porting it.  Please bear with me.
>>>>>>>>
>>>>>>>> Let me know if you run into issues with that and I can help out.
>>>>>>>
>>>>>>> I think the dependency list is too big.
>>>>>>>
>>>>>>> Too much has changed that was never back-ported.
>>>>>>>
>>>>>>> Actually, the list of patches pertaining to fs/io-wq.c alone isn't so
>>>>>>> bad. I did start to back-port them all, but some of the big ones
>>>>>>> incorporate fs/io_uring.c changes, and that list is huge (256 patches
>>>>>>> from v5.10 to the fixing patch mentioned above).
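For scale, both lists can be regenerated in a mainline checkout with git log,
using the fixing commit quoted earlier (fb3a1f6c745c):

    $ git log --oneline v5.10..fb3a1f6c745c -- fs/io-wq.c
    $ git log --oneline v5.10..fb3a1f6c745c -- fs/io-wq.c fs/io_uring.c

The first command shows the short io-wq.c-only list; the second shows the
much larger combined one.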
>>>>>>
>>>>>> The problem is that 5.12 moved to the new worker setup, and this patch
>>>>>> landed after that even though it also applies to the pre-native workers.
>>>>>> Hence the dependency chain isn't really as long as it seems, probably
>>>>>> just a few patches back-porting the reference and completion changes.
>>>>>>
>>>>>> I'll take a look this afternoon.
>>>>>
>>>>> Thanks Jens.  I really appreciate it.
>>>>
>>>> Can you see if this helps? Untested...
>>>
>>> What base does this apply against, please?
>>>
>>> I tried Mainline and v5.10.116 and both failed.
>>
>> It's against 5.10.116, so that's puzzling. Let me double check I sent
>> the right one...
> 
> Looks like I sent the one from the wrong directory, sorry about that.
> This one should be better:

Nope, both are the right one. Maybe your mailer is mangling the patch?
I'll attach it gzip'ed here in case that helps.

-- 
Jens Axboe

[Attachment: patch.gz (application/gzip, 1021 bytes)]
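If the attachment route also misbehaves, one way to sanity-check the patch
from inside a v5.10.116 tree, without applying anything, is:

    $ zcat patch.gz | git apply --check

Here git apply reads the patch from stdin, and --check only reports whether
it would apply cleanly.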
