Message-ID: <YrT6Hdqp36HLK9PJ@netflix>
Date: Thu, 23 Jun 2022 17:41:17 -0600
From: Tycho Andersen <tycho@...ho.pizza>
To: Vivek Goyal <vgoyal@...hat.com>
Cc: Eric Biederman <ebiederm@...ssion.com>,
Christian Brauner <brauner@...nel.org>,
Miklos Szeredi <miklos@...redi.hu>,
fuse-devel@...ts.sourceforge.net, linux-fsdevel@...r.kernel.org,
linux-kernel@...r.kernel.org
Subject: Re: strange interaction between fuse + pidns
On Thu, Jun 23, 2022 at 05:55:20PM -0400, Vivek Goyal wrote:
> So in this case a single process is both the client and the server. IOW,
> one thread is the fuse server servicing fuse requests and the other
> thread is the fuse client accessing the fuse filesystem?
Yes. Probably an abuse of the API and something people Should Not Do,
but as you say the kernel still shouldn't lock up like this.
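Something like this is what I have in mind (sketch only, not the real
reproducer; it assumes libfuse 3's high-level API and a made-up
/mnt/test mount point, with the filesystem handlers elided):

  /*
   * One process acting as both fuse server and client. Build against
   * libfuse 3 and run in the foreground (-f) so the helper thread
   * isn't lost when libfuse would otherwise daemonize.
   */
  #define FUSE_USE_VERSION 31
  #include <fuse.h>
  #include <pthread.h>
  #include <fcntl.h>
  #include <unistd.h>

  /* Handlers elided; any trivial read-only fs will do. */
  static const struct fuse_operations ops = { 0 };

  static void *client_thread(void *arg)
  {
          sleep(1);       /* crude: wait for the mount to show up */

          /* This open()/close() is serviced by our own fuse_main()
           * loop below; the flush on close() is the kind of request
           * that ends up hanging. */
          int fd = open("/mnt/test/hello", O_RDONLY);
          if (fd >= 0)
                  close(fd);
          return NULL;
  }

  int main(int argc, char *argv[])
  {
          pthread_t t;

          pthread_create(&t, NULL, client_thread, NULL);
          /* Services requests on /dev/fuse, including the ones that
           * client_thread in this same process generates. */
          return fuse_main(argc, argv, &ops, NULL);
  }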
> > since the thread has a copy of
> > the fd table with an fd pointing to the same fuse device, the reference
> > count isn't decremented to zero in fuse_dev_release(), and the task hangs
> > forever.
>
> So why did the fuse server thread stop responding to fuse messages? Why
> did it not complete the flush?
In this particular case I think it's because the application crashed
for unrelated reasons and tried to exit the pidns, hitting this
problem.
> BTW, the unkillable wait happens only if fc->no_interrupt = 1. And this
> seems to be set only if some previous interrupt request to the server
> returned -ENOSYS.
>
> fuse_dev_do_write() {
> else if (oh.error == -ENOSYS)
> fc->no_interrupt = 1;
> }
>
> So a simple workaround might be for the server to implement support for
> interrupting requests.
Yes, but that is the libfuse default IIUC.
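For reference, roughly what handling interrupts can look like on the
server side with libfuse 3's low-level API (sketch only; the read
handler and the cancelled flag are made up, not taken from the server
in question):

  #define FUSE_USE_VERSION 31
  #include <fuse_lowlevel.h>
  #include <errno.h>

  /* libfuse calls this when the kernel sends FUSE_INTERRUPT for req. */
  static void intr_cb(fuse_req_t req, void *data)
  {
          int *cancelled = data;
          *cancelled = 1;
  }

  static void sketch_read(fuse_req_t req, fuse_ino_t ino, size_t size,
                          off_t off, struct fuse_file_info *fi)
  {
          int cancelled = 0;

          fuse_req_interrupt_func(req, intr_cb, &cancelled);

          /* ... do the potentially slow work, checking cancelled (or
           * fuse_req_interrupted()) along the way ... */

          if (cancelled || fuse_req_interrupted(req))
                  fuse_reply_err(req, EINTR);  /* we were interrupted */
          else
                  fuse_reply_buf(req, "", 0);  /* sketch: just reply EOF */
  }

The stock libfuse event loops take care of most of this already, which
lines up with interrupt support being the default as mentioned above.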
> Having said that, this does sound like a problem and probably should
> be fixed at kernel level.
>
> >
> > diff --git a/fs/fuse/dev.c b/fs/fuse/dev.c
> > index 0e537e580dc1..c604dfcaec26 100644
> > --- a/fs/fuse/dev.c
> > +++ b/fs/fuse/dev.c
> > @@ -297,7 +297,6 @@ void fuse_request_end(struct fuse_req *req)
> > spin_unlock(&fiq->lock);
> > }
> > WARN_ON(test_bit(FR_PENDING, &req->flags));
> > - WARN_ON(test_bit(FR_SENT, &req->flags));
> > if (test_bit(FR_BACKGROUND, &req->flags)) {
> > spin_lock(&fc->bg_lock);
> > clear_bit(FR_BACKGROUND, &req->flags);
> > @@ -381,30 +380,33 @@ static void request_wait_answer(struct fuse_req *req)
> > queue_interrupt(req);
> > }
> >
> > - if (!test_bit(FR_FORCE, &req->flags)) {
> > - /* Only fatal signals may interrupt this */
> > - err = wait_event_killable(req->waitq,
> > - test_bit(FR_FINISHED, &req->flags));
> > - if (!err)
> > - return;
> > + /* Only fatal signals may interrupt this */
> > + err = wait_event_killable(req->waitq,
> > + test_bit(FR_FINISHED, &req->flags));
>
> Trying to do a fatal signal killable wait sounds reasonable. But I am
> not sure about the history.
>
> - Why FORCE requests can't do killable wait.
> - Why flush needs to have FORCE flag set.
args->force implies a few other things besides this killable wait in
fuse_simple_request(), most notably:
req = fuse_request_alloc(fm, GFP_KERNEL | __GFP_NOFAIL);
and
__set_bit(FR_WAITING, &req->flags);
which suggests it can probably be invoked from some non-user/atomic
context somehow?
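For context, the force branch there looks roughly like this
(paraphrased, not copied verbatim from fs/fuse/dev.c; the tree is
authoritative):

          if (args->force) {
                  atomic_inc(&fc->num_waiting);
                  req = fuse_request_alloc(fm, GFP_KERNEL | __GFP_NOFAIL);

                  if (!args->nocreds)
                          fuse_force_creds(req);

                  __set_bit(FR_WAITING, &req->flags);
                  __set_bit(FR_FORCE, &req->flags);
          } else {
                  /* normal path: fuse_get_req(), which can fail */
                  ...
          }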
> > + if (!err)
> > + return;
> >
> > - spin_lock(&fiq->lock);
> > - /* Request is not yet in userspace, bail out */
> > - if (test_bit(FR_PENDING, &req->flags)) {
> > - list_del(&req->list);
> > - spin_unlock(&fiq->lock);
> > - __fuse_put_request(req);
> > - req->out.h.error = -EINTR;
> > - return;
> > - }
> > + spin_lock(&fiq->lock);
> > + /* Request is not yet in userspace, bail out */
> > + if (test_bit(FR_PENDING, &req->flags)) {
> > + list_del(&req->list);
> > spin_unlock(&fiq->lock);
> > + __fuse_put_request(req);
> > + req->out.h.error = -EINTR;
> > + return;
> > }
> > + spin_unlock(&fiq->lock);
> >
> > /*
> > - * Either request is already in userspace, or it was forced.
> > - * Wait it out.
> > + * Womp womp. We sent a request to userspace and now we're getting
> > + * killed.
> > */
> > - wait_event(req->waitq, test_bit(FR_FINISHED, &req->flags));
> > + set_bit(FR_INTERRUPTED, &req->flags);
> > + /* matches barrier in fuse_dev_do_read() */
> > + smp_mb__after_atomic();
> > + /* request *must* be FR_SENT here, because we ignored FR_PENDING before */
> > + WARN_ON(!test_bit(FR_SENT, &req->flags));
> > + queue_interrupt(req);
> > }
> >
> > static void __fuse_request_send(struct fuse_req *req)
> >
> > available as a full patch here:
> > https://github.com/tych0/linux/commit/81b9ff4c8c1af24f6544945da808dbf69a1293f7
> >
> > but now things are even weirder. Tasks are stuck at the killable wait, but with
> > a SIGKILL pending for the thread group.
>
> That's strange. No idea what's going on.
Thanks for taking a look. This is where it falls apart for me. In
principle the patch seems simple, but this sleeping behavior is beyond
my understanding.
Tycho