Message-ID: <20230518-tacker-ahnen-8eb944bed795@brauner>
Date: Thu, 18 May 2023 17:09:11 +0200
From: Christian Brauner <brauner@...nel.org>
To: Mike Christie <michael.christie@...cle.com>
Cc: oleg@...hat.com, linux@...mhuis.info, nicolas.dichtel@...nd.com,
axboe@...nel.dk, ebiederm@...ssion.com,
torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
virtualization@...ts.linux-foundation.org, mst@...hat.com,
sgarzare@...hat.com, jasowang@...hat.com, stefanha@...hat.com
Subject: Re: [RFC PATCH 5/8] vhost: Add callback that stops new work and
waits on running ones
On Thu, May 18, 2023 at 10:03:32AM -0500, Mike Christie wrote:
> On 5/18/23 9:18 AM, Christian Brauner wrote:
> >> @@ -352,12 +353,13 @@ static int vhost_worker(void *data)
> >> if (!node) {
> >> schedule();
> >> /*
> >> - * When we get a SIGKILL our release function will
> >> - * be called. That will stop new IOs from being queued
> >> - * and check for outstanding cmd responses. It will then
> >> - * call vhost_task_stop to exit us.
> >> + * When we get a SIGKILL we kick off a work to
> >> + * run the driver's helper to stop new work and
> >> + * handle completions. When they are done they will
> >> + * call vhost_task_stop to tell us to exit.
> >> */
> >> - vhost_task_get_signal();
> >> + if (vhost_task_get_signal())
> >> + schedule_work(&dev->destroy_worker);
> >> }
> >
> > I'm pretty sure you still need to actually call exit here. Basically
> > mirror what's done in io_worker_exit() minus the io specific bits.
>
> We do call do_exit(). Once destroy_worker has flushed the device and
> all outstanding IO has completed, it calls vhost_task_stop(). vhost_worker()
> above then breaks out of the loop and returns, and vhost_task_fn() does
> do_exit().
Ah, that callchain wasn't obvious. Thanks.