Message-ID: <Y99+yzngN/8tJKUq@codewreck.org>
Date: Sun, 5 Feb 2023 19:02:51 +0900
From: Dominique Martinet <asmadeus@...ewreck.org>
To: Jens Axboe <axboe@...nel.dk>,
Christian Schoenebeck <linux_oss@...debyte.com>,
Eric Van Hensbergen <ericvh@...nel.org>
Cc: netdev <netdev@...r.kernel.org>, Jakub Kicinski <kuba@...nel.org>,
Pengfei Xu <pengfei.xu@...el.com>,
v9fs-developer@...ts.sourceforge.net
Subject: Re: [PATCH v2] 9p/client: don't assume signal_pending() clears on
recalc_sigpending()
meta-comment: 9p is usually handled separately from netdev, I just saw
this by chance when Simon replied to v1 -- please cc
v9fs-developer@...ts.sourceforge.net for v3 if there is one
(well, it's a bit of a weird tree because patches are sometimes taken
through -net...)
Also added Christian (virtio 9p) and Eric (second maintainer) to the To
list for attention.
Jens Axboe wrote on Fri, Feb 03, 2023 at 09:04:28AM -0700:
> signal_pending() really means that an exit to userspace is required to
> clear the condition, as it could be either an actual signal, or it could
> be TWA_SIGNAL based task_work that needs processing. The 9p client
> does a recalc_sigpending() to take care of the former, but that still
> leaves TWA_SIGNAL task_work. The result is that if we do have TWA_SIGNAL
> task_work pending, then we'll sit in a tight loop spinning as
> signal_pending() remains true even after recalc_sigpending().
>
> Move the signal_pending() logic into a helper that deals with both, and
> return -ERESTARTSYS if the reason for signal_pending() being true is
> that we have task_work to process.
> Link: https://lore.kernel.org/lkml/Y9TgUupO5C39V%2FDW@xpf.sh.intel.com/
> Reported-and-tested-by: Pengfei Xu <pengfei.xu@...el.com>
> Signed-off-by: Jens Axboe <axboe@...nel.dk>
> ---
> v2: don't rely on task_work_run(), rather just punt with -ERESTARTSYS at
> that point. For one, we don't want to export task_work_run(), it's
> in-kernel only. And secondly, we need to ensure we have a sane state
> before running task_work. The latter did look fine before, but this
> should be saner. Tested this also fixes the report as well for me.
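(Context for people not familiar with TIF_NOTIFY_SIGNAL: if I remember
the core signal code correctly -- please double-check against the tree --
signal_pending() reports both flags, roughly:

static inline int signal_pending(struct task_struct *p)
{
	/* TIF_NOTIFY_SIGNAL: task_work wants to run, not an actual signal */
	if (unlikely(test_tsk_thread_flag(p, TIF_NOTIFY_SIGNAL)))
		return 1;
	/* TIF_SIGPENDING: this is the part recalc_sigpending() recomputes */
	return task_sigpending(p);
}

so clearing and later recalculating TIF_SIGPENDING alone, as the current
9p loop does, can never make signal_pending() go false while task_work
is pending.)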
Hmm, just bailing out here is a can of worms -- when we get the reply
from the server, depending on the transport, hell might break loose (zc
requests in particular on virtio will probably just access the memory
anyway... fd will consider it got a bogus reply and close the
connection, which is a lesser evil but still not appropriate)
We really need to get rid of that retry loop in the first place; the req
refcounting I added a couple of years ago was a first step towards async
flush, which will help with that, but the async flush code had a bug I
never found time to track down, so it never made it in, and we need an
immediate fix.
... Just thinking out loud while looking at the code, sorry for
rambling: actually that signal handling already happens outside of
p9_virtio_zc_request() for virtio, so the pages are already unpinned by
the time we do that flush, and I guess it's not much worse -- refcounting
will make it "mostly work" exactly as it does now, as in the pages won't
be freed until we actually get the reply; the pages can still get moved
underneath virtio, which is bad, but that is the same as right now, so I
guess it's a net improvement?
I agree with your assessment that we can't use task_work_run(); I assume
it's also quite bad to just clear the flag?
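By "just clear the flag" I mean something along these lines in the new
p9_sigpending() helper, purely hypothetical and presumably wrong since
it would drop the task_work notification on the floor, hence the
question:

	/*
	 * Hypothetical alternative: treat TIF_NOTIFY_SIGNAL like
	 * TIF_SIGPENDING and just clear it before waiting -- most likely
	 * this loses the notification entirely.
	 */
	if (test_thread_flag(TIF_NOTIFY_SIGNAL))
		clear_thread_flag(TIF_NOTIFY_SIGNAL);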
I'm not familiar with these task_work notifications at all -- in which
cases do they happen?
Would you be able to share an easy reproducer so that I/someone can try
on various transports?
If it's "rare enough" I'd say sacrificing the connection might make more
sense than a busy loop, but if it's becoming common I think we'll need
to spend some more time thinking about it...
It might be less effort to dig out my async flush commits if this
becomes too complicated, but I wish I could say I have time for it...
Thanks!
>
> diff --git a/net/9p/client.c b/net/9p/client.c
> index 622ec6a586ee..9caa66cbd5b7 100644
> --- a/net/9p/client.c
> +++ b/net/9p/client.c
> @@ -652,6 +652,25 @@ static struct p9_req_t *p9_client_prepare_req(struct p9_client *c,
> return ERR_PTR(err);
> }
>
> +static int p9_sigpending(int *sigpending)
> +{
> + *sigpending = 0;
> +
> + if (!signal_pending(current))
> + return 0;
> +
> + /*
> + * If we have a TIF_NOTIFY_SIGNAL pending, abort to get it
> + * processed.
> + */
> + if (test_thread_flag(TIF_NOTIFY_SIGNAL))
> + return -ERESTARTSYS;
> +
> + *sigpending = 1;
> + clear_thread_flag(TIF_SIGPENDING);
> + return 0;
> +}
> +
> /**
> * p9_client_rpc - issue a request and wait for a response
> * @c: client session
> @@ -687,12 +706,9 @@ p9_client_rpc(struct p9_client *c, int8_t type, const char *fmt, ...)
> req->tc.zc = false;
> req->rc.zc = false;
>
> - if (signal_pending(current)) {
> - sigpending = 1;
> - clear_thread_flag(TIF_SIGPENDING);
> - } else {
> - sigpending = 0;
> - }
> + err = p9_sigpending(&sigpending);
> + if (err)
> + goto reterr;
>
> err = c->trans_mod->request(c, req);
> if (err < 0) {
> @@ -789,12 +805,9 @@ static struct p9_req_t *p9_client_zc_rpc(struct p9_client *c, int8_t type,
> req->tc.zc = true;
> req->rc.zc = true;
>
> - if (signal_pending(current)) {
> - sigpending = 1;
> - clear_thread_flag(TIF_SIGPENDING);
> - } else {
> - sigpending = 0;
> - }
> + err = p9_sigpending(&sigpending);
> + if (err)
> + goto reterr;
>
> err = c->trans_mod->zc_request(c, req, uidata, uodata,
> inlen, olen, in_hdrlen);
>