Message-ID: <7fa16101-4528-26ee-f8ee-5eff13946e5e@arm.com>
Date: Fri, 30 Nov 2018 10:28:51 +0000
From: Jean-Philippe Brucker <jean-philippe.brucker@....com>
To: Jason Wang <jasowang@...hat.com>,
"xiangxia.m.yue@...il.com" <xiangxia.m.yue@...il.com>,
"mst@...hat.com" <mst@...hat.com>,
"makita.toshiaki@....ntt.co.jp" <makita.toshiaki@....ntt.co.jp>,
"davem@...emloft.net" <davem@...emloft.net>
Cc: "netdev@...r.kernel.org" <netdev@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>
Subject: Re: [REBASE PATCH net-next v9 1/4] net: vhost: lock the vqs one by
one
On 30/11/2018 02:34, Jason Wang wrote:
>
> On 2018/11/30 3:28 AM, Jean-Philippe Brucker wrote:
>> Hi,
>>
>> On 25/09/2018 13:36, xiangxia.m.yue@...il.com wrote:
>>> From: Tonghao Zhang <xiangxia.m.yue@...il.com>
>>>
>>> This patch changes the way we lock all vqs at the
>>> same time, locking them one by one instead. This
>>> will be used by the next patch to avoid a deadlock.
>>>
>>> Signed-off-by: Tonghao Zhang <xiangxia.m.yue@...il.com>
>>> Acked-by: Jason Wang <jasowang@...hat.com>
>>> Signed-off-by: Jason Wang <jasowang@...hat.com>
>>> ---
>>> drivers/vhost/vhost.c | 24 +++++++-----------------
>>> 1 file changed, 7 insertions(+), 17 deletions(-)
>>>
>>> diff --git a/drivers/vhost/vhost.c b/drivers/vhost/vhost.c
>>> index b13c6b4..f52008b 100644
>>> --- a/drivers/vhost/vhost.c
>>> +++ b/drivers/vhost/vhost.c
>>> @@ -294,8 +294,11 @@ static void vhost_vq_meta_reset(struct vhost_dev *d)
>>> {
>>> int i;
>>>
>>> - for (i = 0; i < d->nvqs; ++i)
>>> + for (i = 0; i < d->nvqs; ++i) {
>>> + mutex_lock(&d->vqs[i]->mutex);
>>> __vhost_vq_meta_reset(d->vqs[i]);
>>> + mutex_unlock(&d->vqs[i]->mutex);
>>> + }
>>> }
>>>
>>> static void vhost_vq_reset(struct vhost_dev *dev,
>>> @@ -891,20 +894,6 @@ static inline void __user *__vhost_get_user(struct vhost_virtqueue *vq,
>>> #define vhost_get_used(vq, x, ptr) \
>>> vhost_get_user(vq, x, ptr, VHOST_ADDR_USED)
>>>
>>> -static void vhost_dev_lock_vqs(struct vhost_dev *d)
>>> -{
>>> - int i = 0;
>>> - for (i = 0; i < d->nvqs; ++i)
>>> - mutex_lock_nested(&d->vqs[i]->mutex, i);
>>> -}
>>> -
>>> -static void vhost_dev_unlock_vqs(struct vhost_dev *d)
>>> -{
>>> - int i = 0;
>>> - for (i = 0; i < d->nvqs; ++i)
>>> - mutex_unlock(&d->vqs[i]->mutex);
>>> -}
>>> -
>>> static int vhost_new_umem_range(struct vhost_umem *umem,
>>> u64 start, u64 size, u64 end,
>>> u64 userspace_addr, int perm)
>>> @@ -954,7 +943,10 @@ static void vhost_iotlb_notify_vq(struct vhost_dev *d,
>>> if (msg->iova <= vq_msg->iova &&
>>> msg->iova + msg->size - 1 >= vq_msg->iova &&
>>> vq_msg->type == VHOST_IOTLB_MISS) {
>>> + mutex_lock(&node->vq->mutex);
>> This seems to introduce a deadlock (and sleep-in-atomic): the vq->mutex
>> is taken while the IOTLB spinlock is held (taken earlier in
>> vhost_iotlb_notify_vq()). On the vhost_iotlb_miss() path, the IOTLB
>> spinlock is taken while the vq->mutex is held.
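>>
>> Roughly, the two orderings (as I read the code, so take this sketch
>> with a grain of salt):
>>
>>   vhost_iotlb_notify_vq()            vhost_iotlb_miss()
>>     spin_lock(IOTLB lock)              (vq->mutex held by the caller)
>>     mutex_lock(&node->vq->mutex)       spin_lock(IOTLB lock)
>>
>> so the two locks are taken in opposite order, and the left-hand side
>> also sleeps in mutex_lock() while holding a spinlock.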
>
>
> Good catch.
>
>
>> I'm not sure how to fix it. Given that we're holding dev->mutex, that
>> vq->poll only seems to be modified under dev->mutex, and assuming that
>> vhost_poll_queue(vq->poll) can be called concurrently, is it safe to
>> simply not take vq->mutex here?
>
>
> Yes, I think it can be removed here.
>
> Want to post a patch for this?
Yes, I'll post it shortly.
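
Something like the below (untested sketch; the context lines around the
hunk are written from memory, so only the idea matters):

 		    vq_msg->type == VHOST_IOTLB_MISS) {
-			mutex_lock(&node->vq->mutex);
 			vhost_poll_queue(&node->vq->poll);
-			mutex_unlock(&node->vq->mutex);
 			list_del(&node->node);
 			kfree(node);

i.e. drop the vq->mutex around vhost_poll_queue() and rely on dev->mutex
to keep vq->poll stable.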
Thanks,
Jean