Message-ID: <20230928164550.980832-9-dtatulea@nvidia.com>
Date: Thu, 28 Sep 2023 19:45:18 +0300
From: Dragos Tatulea <dtatulea@...dia.com>
To: <eperezma@...hat.com>, <gal@...dia.com>,
"Michael S . Tsirkin" <mst@...hat.com>,
Jason Wang <jasowang@...hat.com>,
Xuan Zhuo <xuanzhuo@...ux.alibaba.com>
CC: <virtualization@...ts.linux-foundation.org>,
Dragos Tatulea <dtatulea@...dia.com>,
<linux-kernel@...r.kernel.org>
Subject: [PATCH vhost 07/16] vdpa/mlx5: Take cvq iotlb lock during refresh
The reslock is taken while refresh is called, but the iommu_lock is more
specific to this resource. So take the iommu_lock during cvq iotlb
refresh.
Based on Eugenio's patch [0].
[0] https://lore.kernel.org/lkml/20230112142218.725622-4-eperezma@redhat.com/
Acked-by: Jason Wang <jasowang@...hat.com>
Suggested-by: Eugenio Pérez <eperezma@...hat.com>
Signed-off-by: Dragos Tatulea <dtatulea@...dia.com>
---
drivers/vdpa/mlx5/core/mr.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/drivers/vdpa/mlx5/core/mr.c b/drivers/vdpa/mlx5/core/mr.c
index fcb6ae32e9ed..587300e7c18e 100644
--- a/drivers/vdpa/mlx5/core/mr.c
+++ b/drivers/vdpa/mlx5/core/mr.c
@@ -590,11 +590,19 @@ int mlx5_vdpa_update_cvq_iotlb(struct mlx5_vdpa_dev *mvdev,
struct vhost_iotlb *iotlb,
unsigned int asid)
{
+ int err;
+
if (mvdev->group2asid[MLX5_VDPA_CVQ_GROUP] != asid)
return 0;
+ spin_lock(&mvdev->cvq.iommu_lock);
+
prune_iotlb(mvdev);
- return dup_iotlb(mvdev, iotlb);
+ err = dup_iotlb(mvdev, iotlb);
+
+ spin_unlock(&mvdev->cvq.iommu_lock);
+
+ return err;
}
int mlx5_vdpa_create_dma_mr(struct mlx5_vdpa_dev *mvdev)
--
2.41.0