Message-ID: <00a0809e-7b47-c43c-3a13-a84cd692f514@kernel.dk>
Date: Mon, 6 Feb 2023 14:56:57 -0700
From: Jens Axboe <axboe@...nel.dk>
To: Dominique Martinet <asmadeus@...ewreck.org>
Cc: Christian Schoenebeck <linux_oss@...debyte.com>,
Eric Van Hensbergen <ericvh@...nel.org>,
netdev <netdev@...r.kernel.org>,
Jakub Kicinski <kuba@...nel.org>,
Pengfei Xu <pengfei.xu@...el.com>,
v9fs-developer@...ts.sourceforge.net
Subject: Re: [PATCH v2] 9p/client: don't assume signal_pending() clears on
recalc_sigpending()
On 2/6/23 2:42 PM, Dominique Martinet wrote:
> Jens Axboe wrote on Mon, Feb 06, 2023 at 01:19:24PM -0700:
>>> I agree with your assessment that we can't use task_work_run(), I assume
>>> it's also quite bad to just clear the flag?
>>> I'm not familiar with these tasks at all; in which cases do they happen?
>>> Would you be able to share an easy reproducer so that I/someone can try
>>> on various transports?
>>
>> You can't just clear the flag without also running the task_work. Hence
>> it either needs to be done right there, or leave it pending and let the
>> exit to userspace take care of it.
>
> Sorry I didn't develop that idea; the signal path resets the pending
> signal when we're done, I assumed we could also reset the TWA_SIGNAL
> flag when we're done flushing. That might take a while though, so it's
> far from optimal.
Sure, if you set it again when done, then it will probably work just
fine. But you need to treat TIF_NOTIFY_SIGNAL and TIF_SIGPENDING
separately. An attempt at that is at the end of this email; it's totally
untested, and I'm not certain it's a good idea at all (see below). Is
there a reason why we can't exit and get the task_work processed
instead? That'd be greatly preferable.
>>> If it's "rare enough" I'd say sacrificing the connection might make more
>>> sense than a busy loop, but if it's becoming common I think we'll need
>>> to spend some more time thinking about it...
>>> It might be less effort to dig out my async flush commits if this becomes
>>> too complicated, but I wish I could say I have time for it...
>>
>> It can be a number of different things - eg fput() will do it.
>
> Hm, schedule_delayed_work on the last fput, ok.
> I was wondering what it had to do with the current 9p thread, but since
> it's not scheduled on a particular cpu it can pick another cpu to wake
> up, that makes sense -- although conceptually it feels rather bad to
> interrupt a remote IO because of a local task that can be done later;
> e.g. given the choice between having the fput wait a bit and cancelling a
> slow operation like a 1MB write, I'd rather make the fput wait.
> Do you know why that signal/interrupt is needed in the first place?
It's needed if the task is currently sleeping in the kernel, to abort a
sleeping loop. The task_work may contain actions that will result in the
sleep loop being satisfied and hence ending, which means it needs to be
processed. That's my worry with the check-and-clear, then reset state
solution.
>> The particular case that I came across was io_uring, which will use
>> TWA_SIGNAL based task_work for retry operations (and other things). If
>> you use io_uring, and depending on how you setup the ring, it can be
>> quite common or will never happen. Dropping the connection because
>> task_work is pending is not a viable solution, I'm afraid.
>
> Thanks for confirming that it's perfectly normal, let's not drop
> connections :)
>
> My preferred approach is still to try and restore the async flush code,
> but that will take a while -- it's not something that'll work right away
> and I want some tests so it won't be ready for this merge window.
> If we can have some sort of workaround until then it'll probably be for
> the best, but I don't have any other idea (than temporarily clearing the
> flag) at this point.
>
> I'll setup some uring IO on 9p and see if I can produce these.
I'm attaching a test case. I don't think it's particularly useful, but
it does nicely demonstrate the infinite loop that 9p gets into if
there's task_work pending.
--
Jens Axboe
[Attachment: "repro.c", text/x-csrc, 19895 bytes]