Message-ID: <20141120164711.GA7495@redhat.com>
Date: Thu, 20 Nov 2014 18:47:11 +0200
From: "Michael S. Tsirkin" <mst@...hat.com>
To: Tejun Heo <tj@...nel.org>
Cc: Petr Mladek <pmladek@...e.cz>,
Rusty Russell <rusty@...tcorp.com.au>,
Jeff Epler <jepler@...ythonic.net>,
Jiri Kosina <jkosina@...e.cz>,
virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v3] virtio_balloon: Convert "vballoon" kthread into a
workqueue
On Thu, Nov 20, 2014 at 11:29:35AM -0500, Tejun Heo wrote:
> On Thu, Nov 20, 2014 at 06:26:24PM +0200, Michael S. Tsirkin wrote:
> > On Thu, Nov 20, 2014 at 06:25:43PM +0200, Michael S. Tsirkin wrote:
> > > On Thu, Nov 20, 2014 at 11:07:46AM -0500, Tejun Heo wrote:
> > > > On Thu, Nov 20, 2014 at 05:03:17PM +0100, Petr Mladek wrote:
> > > > ...
> > > > > @@ -476,7 +460,6 @@ static void virtballoon_remove(struct virtio_device *vdev)
> > > > > {
> > > > > struct virtio_balloon *vb = vdev->priv;
> > > > >
> > > > > - kthread_stop(vb->thread);
> > > > > remove_common(vb);
> > > > > kfree(vb);
> > > > > }
> > > >
> > > > Shouldn't the work item be flushed before removal is complete?
> > >
> > > In fact, flushing it won't help because it can requeue itself, right?
>
> There's cancel_work_sync() to stop the self-requeueing ones.
What happens if queue_work runs while cancel_work_sync is in progress?
Does it fail to queue?
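
Just so we're talking about the same ordering, below is the kind of
teardown I have in mind (a rough, untested sketch only; the stop_update
flag, its lock and the work item name are made up for illustration,
this is not Petr's patch):

static void virtballoon_remove(struct virtio_device *vdev)
{
	struct virtio_balloon *vb = vdev->priv;

	/* Tell the self-requeueing work function not to re-arm itself.
	 * Field names here are illustrative only. */
	spin_lock_irq(&vb->stop_update_lock);
	vb->stop_update = true;
	spin_unlock_irq(&vb->stop_update_lock);

	/* Waits for a running instance and removes a pending one;
	 * only safe because nothing will queue the work again. */
	cancel_work_sync(&vb->update_balloon_work);

	remove_common(vb);
	kfree(vb);
}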
> > From that POV a dedicated WQ kept it simple.
>
> A dedicated wq doesn't do anything for that. You can't shut down a
> workqueue with a pending work item on it. destroy_workqueue() will
> try to drain the target wq, warn if it doesn't finish in a certain
> number of iterations and just keep trying indefinitely.
>
> Thanks.
Right, so eventually we'll stop requeueing and it will succeed?
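
IOW, for the dedicated-wq variant, something along these lines (again
just a sketch with invented names, not real code): stop the requeueing
first, then the drain in destroy_workqueue() can actually complete:

static void balloon_destroy_wq(struct virtio_balloon *vb)
{
	/* Assumed flag: the work function checks it and returns
	 * without calling queue_work() once it is set. */
	vb->stop_update = true;

	/* destroy_workqueue() drains the wq; with requeueing stopped
	 * the drain finishes instead of spinning indefinitely. */
	destroy_workqueue(vb->balloon_wq);
}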
> --
> tejun