Message-ID: <ZThcATP9zOoxb4Ec@dread.disaster.area>
Date:   Wed, 25 Oct 2023 11:06:25 +1100
From:   Dave Chinner <david@...morbit.com>
To:     Jens Axboe <axboe@...nel.dk>
Cc:     Andres Freund <andres@...razel.de>, Theodore Ts'o <tytso@....edu>,
        Thorsten Leemhuis <regressions@...mhuis.info>,
        Shreeya Patel <shreeya.patel@...labora.com>,
        linux-ext4@...r.kernel.org,
        Ricardo Cañuelo <ricardo.canuelo@...labora.com>, gustavo.padovan@...labora.com,
        zsm@...gle.com, garrick@...gle.com,
        Linux regressions mailing list <regressions@...ts.linux.dev>,
        io-uring@...r.kernel.org
Subject: Re: task hung in ext4_fallocate #2

On Tue, Oct 24, 2023 at 12:35:26PM -0600, Jens Axboe wrote:
> On 10/24/23 8:30 AM, Jens Axboe wrote:
> > I don't think this is related to the io-wq workers doing non-blocking
> > IO.

The io-wq worker that has deadlocked _must_ be doing blocking IO. If
it was doing non-blocking IO (i.e. IOCB_NOWAIT) then it would have
done a trylock and returned -EAGAIN to the worker for it to try
again later. I'm not sure that would avoid the issue, however - it
seems to me like it might just turn it into a livelock rather than a
deadlock....
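
To be clear about the pattern I mean: the non-blocking path is shaped
roughly like this (illustrative sketch only, not any specific in-tree
function):

#include <linux/fs.h>

static ssize_t fs_dio_write(struct kiocb *iocb, struct iov_iter *from)
{
	struct inode *inode = file_inode(iocb->ki_filp);

	if (iocb->ki_flags & IOCB_NOWAIT) {
		/* non-blocking: trylock and punt back on contention */
		if (!inode_trylock(inode))
			return -EAGAIN;
	} else {
		/* blocking: the io-wq worker can sleep here */
		inode_lock(inode);
	}

	/* ... issue the direct IO from @from ... */

	inode_unlock(inode);
	return 0;
}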

> > The callback is eventually executed by the task that originally
> > submitted the IO, which is the owner and not the async workers. But...
> > If that original task is blocked in eg fallocate, then I can see how
> > that would potentially be an issue.
> > 
> > I'll take a closer look.
> 
> I think the best way to fix this is likely to have inode_dio_wait() be
> interruptible, and return -ERESTARTSYS if it should be restarted. Now
> the below is obviously not a full patch, but I suspect it'll make ext4
> and xfs tick, because they should both be affected.
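
For concreteness, I'm reading that as something along these lines
(sketch only, mirroring the existing wait-bit implementation in
fs/inode.c; not the actual diff):

static int inode_dio_wait_interruptible(struct inode *inode)
{
	wait_queue_head_t *wq = bit_waitqueue(&inode->i_state,
					      __I_DIO_WAKEUP);
	DEFINE_WAIT_BIT(q, &inode->i_state, __I_DIO_WAKEUP);
	int ret = 0;

	do {
		prepare_to_wait(wq, &q.wq_entry, TASK_INTERRUPTIBLE);
		if (atomic_read(&inode->i_dio_count))
			schedule();
		if (signal_pending(current)) {
			/* bail out and let the syscall be restarted */
			ret = -ERESTARTSYS;
			break;
		}
	} while (atomic_read(&inode->i_dio_count));
	finish_wait(wq, &q.wq_entry);

	return ret;
}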

How does that solve the problem? Nothing will issue a signal to the
process that is waiting in inode_dio_wait() except userspace, so I
can't see how this does anything to solve the problem at hand...

I'm also very leery of adding new error handling complexity to paths
like truncate, extent cloning, fallocate, etc., which expect to block
on locks until they can perform the operation safely.

On further thought, this could be a self-deadlock with just async
direct IO submission: submit an async DIO with IOCB_CALLER_COMP, then
submit an unaligned async DIO to the same inode that needs to drain
all in-flight DIO before continuing. The submitting thread then sleeps
in inode_dio_wait(), but it is also the only thread that can run the
deferred completion that would drop the i_dio_count to zero.
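
The accounting that makes this stick is simple - only DIO completion
can drop the count (paraphrasing the helpers in include/linux/fs.h):

static inline void inode_dio_begin(struct inode *inode)
{
	atomic_inc(&inode->i_dio_count);	/* at DIO submission */
}

static inline void inode_dio_end(struct inode *inode)
{
	/* at DIO completion: last one out wakes inode_dio_wait() */
	if (atomic_dec_and_test(&inode->i_dio_count))
		wake_up_bit(&inode->i_state, __I_DIO_WAKEUP);
}

/*
 * Deadlock timeline, single submitter task:
 *  1. submit async DIO w/ IOCB_CALLER_COMP -> inode_dio_begin()
 *  2. IO completes, but inode_dio_end() is deferred to the submitter
 *  3. submit unaligned async DIO           -> inode_dio_wait()
 *  4. task sleeps on i_dio_count > 0, yet it is the only task that
 *     can run the deferred completion from step 2. Deadlock.
 */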

Hence it appears to me that we've missed some critical constraints
around nesting IO submission and completion when using
IOCB_CALLER_COMP. Further, it really isn't clear to me how deep the
scope of this problem is yet, let alone what the solution might be.

With all this in mind, and given how late we are in the 6.6 cycle,
can we just revert the IOCB_CALLER_COMP changes for now?

-Dave.
-- 
Dave Chinner
david@...morbit.com
