Message-ID: <YdMgCS1RMcb5V2RJ@localhost.localdomain>
Date: Mon, 3 Jan 2022 11:10:49 -0500
From: Josef Bacik <josef@...icpanda.com>
To: Yongji Xie <xieyongji@...edance.com>
Cc: Christoph Hellwig <hch@...radead.org>,
Jens Axboe <axboe@...nel.dk>,
Bart Van Assche <bvanassche@....org>,
linux-block@...r.kernel.org, nbd@...er.debian.org,
linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v2] nbd: Don't use workqueue to handle recv work
On Thu, Dec 30, 2021 at 12:01:23PM +0800, Yongji Xie wrote:
> On Thu, Dec 30, 2021 at 1:35 AM Christoph Hellwig <hch@...radead.org> wrote:
> >
> > On Mon, Dec 27, 2021 at 05:12:41PM +0800, Xie Yongji wrote:
> > > The rescuer thread might take over the works queued on
> > > the workqueue when the worker thread creation times out.
> > > If this happens, we have no chance to create multiple
> > > recv threads, which causes I/O to hang on this nbd device.
> >
> > If a workqueue is used there aren't really 'receive threads'.
> > What is the deadlock here?
>
> We might have multiple recv works, and those recv works won't quit
> unless the socket is closed. If the rescuer thread takes over those
> works, only the first recv work can run. The I/O that should be handled
> by the other recv works hangs, since no thread is left to handle it.
>
I'm not following this explanation. What is the rescuer thread you're talking
about? If there's an error, we close the socket, which errors out the recvmsg
and makes the recv workqueue shut down.
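
To spell out what I mean by that (a simplified sketch with made-up names like
my_recv_ctx, not the actual nbd code): each recv work just sits in recvmsg()
until the socket is closed, the error return breaks its loop, and the work item
finishes so the workqueue can drain:

#include <linux/kernel.h>
#include <linux/net.h>
#include <linux/socket.h>
#include <linux/uio.h>
#include <linux/workqueue.h>

/*
 * Simplified sketch of my mental model; "my_recv_ctx" and the fixed-size
 * reply buffer are made up for illustration, this is not the actual nbd code.
 */
struct my_recv_ctx {
	struct work_struct work;
	struct socket *sock;
};

static void recv_work_sketch(struct work_struct *work)
{
	struct my_recv_ctx *ctx = container_of(work, struct my_recv_ctx, work);
	char reply[32];
	int ret;

	for (;;) {
		struct msghdr msg = {};
		struct kvec iov = { .iov_base = reply,
				    .iov_len = sizeof(reply) };

		/* Blocks here until a reply arrives or the socket dies. */
		ret = kernel_recvmsg(ctx->sock, &msg, &iov, 1, sizeof(reply),
				     MSG_WAITALL);
		if (ret <= 0)
			break;	/* socket closed or errored out -> stop */

		/* complete the corresponding request here */
	}
	/* Returning lets the workqueue retire this work item and drain. */
}
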
> In that case, we can see below stacks in rescuer thread:
>
> __schedule
> schedule
> schedule_timeout
> unix_stream_read_generic
> unix_stream_recvmsg
> sock_xmit
> nbd_read_stat
> recv_work
> process_one_work
> rescuer_thread
> kthread
> ret_from_fork
This is just the thing hanging, waiting for an incoming request, so it doesn't
tell me anything. Thanks,
Josef