Message-Id: <1279217878.8566.6.camel@w-sridhar.beaverton.ibm.com>
Date: Thu, 15 Jul 2010 11:17:57 -0700
From: Sridhar Samudrala <sri@...ibm.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: "David S. Miller" <davem@...emloft.net>,
Arnd Bergmann <arnd@...db.de>,
"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
kvm@...r.kernel.org, virtualization@...ts.osdl.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH] vhost-net: avoid flush under lock
On Thu, 2010-07-15 at 15:19 +0300, Michael S. Tsirkin wrote:
> We flush under the vq mutex when changing backends.
> This creates a deadlock, as the workqueue being flushed
> needs this lock as well.
>
> https://bugzilla.redhat.com/show_bug.cgi?id=612421
>
> Drop the vq mutex before flush: we have the device mutex
> which is sufficient to prevent another ioctl from touching
> the vq.
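
For reference, here is a minimal userspace sketch of the locking pattern
described above -- pthread mutexes and a thread join stand in for the
vq/device mutexes and the workqueue flush. The names (set_backend, flush_vq,
worker_fn) are made up for illustration; this is not the actual vhost code.

/*
 * Simplified analogue of the deadlock: the worker needs vq_mutex, and
 * "flushing" means waiting for the worker to finish.  Waiting while
 * still holding vq_mutex can never complete.
 */
#include <pthread.h>
#include <stdbool.h>
#include <stdio.h>

static pthread_mutex_t dev_mutex = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t vq_mutex  = PTHREAD_MUTEX_INITIALIZER;
static pthread_t worker;
static bool work_pending = true;

static void *worker_fn(void *arg)   /* like the tx/rx work: needs vq_mutex */
{
    (void)arg;
    pthread_mutex_lock(&vq_mutex);
    work_pending = false;           /* "process" the queued work */
    pthread_mutex_unlock(&vq_mutex);
    return NULL;
}

static void flush_vq(void)          /* flush == wait for the worker to finish */
{
    pthread_join(worker, NULL);
}

static void set_backend(void)
{
    pthread_mutex_lock(&dev_mutex); /* keeps other "ioctls" away from the vq */
    pthread_mutex_lock(&vq_mutex);

    /* ... swap the vq's backend to the new socket here ... */

    /*
     * Drop vq_mutex BEFORE flushing: flushing while still holding it
     * would wait forever for worker_fn(), which is blocked on vq_mutex.
     */
    pthread_mutex_unlock(&vq_mutex);
    flush_vq();
    pthread_mutex_unlock(&dev_mutex);
}

int main(void)
{
    pthread_create(&worker, NULL, worker_fn, NULL);
    set_backend();
    printf("backend changed, work_pending=%d\n", (int)work_pending);
    return 0;
}

If flush_vq() were called with vq_mutex still held, the join would wait
forever for a worker that is itself blocked on vq_mutex -- the same shape
of deadlock as in the bugzilla above, while the outer dev_mutex is what
keeps other ioctls off the vq during the brief unlocked window.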
Why do we need to flush the vq when we try to set the backend and
find that it is already set? Is this just an optimization?
Thanks
Sridhar
>
> Signed-off-by: Michael S. Tsirkin <mst@...hat.com>
> ---
> drivers/vhost/net.c | 5 +++++
> 1 files changed, 5 insertions(+), 0 deletions(-)
>
> diff --git a/drivers/vhost/net.c b/drivers/vhost/net.c
> index 28d7786..50df58e6 100644
> --- a/drivers/vhost/net.c
> +++ b/drivers/vhost/net.c
> @@ -534,11 +534,16 @@ static long vhost_net_set_backend(struct vhost_net *n, unsigned index, int fd)
> rcu_assign_pointer(vq->private_data, sock);
> vhost_net_enable_vq(n, vq);
> done:
> + mutex_unlock(&vq->mutex);
> +
> if (oldsock) {
> vhost_net_flush_vq(n, index);
> fput(oldsock->file);
> }
>
> + mutex_unlock(&n->dev.mutex);
> + return 0;
> +
> err_vq:
> mutex_unlock(&vq->mutex);
> err: