Message-ID: <bb423622f97826f483100a1a7f20ce10a9090158.camel@trillion01.com>
Date:   Mon, 22 Aug 2022 23:35:37 -0400
From:   Olivier Langlois <olivier@...llion01.com>
To:     "Eric W. Biederman" <ebiederm@...ssion.com>,
        Jens Axboe <axboe@...nel.dk>
Cc:     Pavel Begunkov <asml.silence@...il.com>,
        linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
        io-uring@...r.kernel.org, Alexander Viro <viro@...iv.linux.org.uk>,
        Oleg Nesterov <oleg@...hat.com>,
        Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH 2/2] coredump: Allow coredumps to pipes to work with
 io_uring

On Mon, 2022-08-22 at 17:16 -0400, Olivier Langlois wrote:
> 
> What is stopping the task calling do_coredump() from being interrupted
> so that task_work_add() is called from interrupt context?
> 
> This is precisely what I was experiencing last summer when I did work
> on this issue.
> 
> My understanding of how async I/O works with io_uring is that the task
> is added to a wait queue without being put to sleep. When the io_uring
> completion callback is invoked from interrupt context, task_work_add()
> is called so that the next time an io_uring syscall is made, the
> pending work is processed to complete the I/O.
> 
> So if:
> 
> 1. io_uring request is initiated AND the task is in a wait queue
> 2. do_coredump() is called before the I/O is completed
> 
> IMHO, this is how you end up having task_work_add() called while the
> coredump is being generated.
> 
I forgot to add that I have experienced the issue with TCP/IP I/O.

I suspect that with a TCP socket the race window is much larger than it
would be with disk I/O, which might make the issue easier to reproduce
this way...
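
For what it's worth, here is a rough userspace sketch of the kind of
workload I have in mind (not taken from my actual setup, just an
illustration built on liburing, with "example.org" as an arbitrary
peer): a request is sent on a TCP socket, a recv is submitted through
io_uring, and the task crashes while that recv is still in flight, so
the response arriving from the network races with the coredump being
written.

/*
 * Illustrative sketch only (assumes liburing is installed).
 * Submit a recv on a TCP socket, then crash while it is pending:
 * the completion later fires from interrupt/softirq context and
 * calls task_work_add() against the task that is dumping core.
 */
#include <liburing.h>
#include <netdb.h>
#include <signal.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        struct io_uring ring;
        struct io_uring_sqe *sqe;
        struct addrinfo hints, *res;
        static char buf[4096];
        const char *req = "GET / HTTP/1.0\r\nHost: example.org\r\n\r\n";
        int fd;

        memset(&hints, 0, sizeof(hints));
        hints.ai_family = AF_INET;
        hints.ai_socktype = SOCK_STREAM;
        if (getaddrinfo("example.org", "80", &hints, &res))
                return 1;

        fd = socket(res->ai_family, res->ai_socktype, res->ai_protocol);
        if (fd < 0 || connect(fd, res->ai_addr, res->ai_addrlen) < 0)
                return 1;
        freeaddrinfo(res);

        /* Send a request so a response will arrive ~one RTT later. */
        if (write(fd, req, strlen(req)) < 0)
                return 1;

        if (io_uring_queue_init(8, &ring, 0) < 0)
                return 1;

        /* Queue an async recv; it stays pending until the reply lands. */
        sqe = io_uring_get_sqe(&ring);
        io_uring_prep_recv(sqe, fd, buf, sizeof(buf), 0);
        io_uring_submit(&ring);

        /*
         * Crash while the recv is outstanding: the coredump is written
         * while the io_uring completion path may still queue task work
         * against this task.
         */
        raise(SIGSEGV);

        /* Not reached. */
        io_uring_queue_exit(&ring);
        close(fd);
        return 0;
}

Whether the completion actually lands while the dump is being written
is timing dependent, of course.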
