Message-ID: <CACGkMEtu=Xiqc1JJrRVZ40dGsP8su_USq3ZJAWKgb4QaA4F5xw@mail.gmail.com>
Date: Mon, 17 Apr 2023 11:31:50 +0800
From: Jason Wang <jasowang@...hat.com>
To: Cindy Lu <lulu@...hat.com>
Cc: mst@...hat.com, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] vhost_vdpa: fix unmap process in no-batch mode
On Sat, Apr 15, 2023 at 10:20 AM Cindy Lu <lulu@...hat.com> wrote:
>
> When using no-batch mode with vIOMMU enabled,
> QEMU may ask the kernel to unmap a memory range much larger than the
> memory actually mapped in the kernel. The iotlb is then NULL in the
> kernel, and the unmap call returns a failure.
This patch looks good, but I don't understand the above. I think it's
better to explain why such a large unmap leads to this error:
Is it a batched unmap or a [0, ULONG_MAX] unmap? How could we end up
with a NULL iotlb?
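
To make the question concrete, here is a toy userspace model of the
sequence that seems to produce the NULL iotlb (purely illustrative;
the struct and table below are stand-ins, not the kernel code). The
old behaviour freed the AS as soon as nmaps hit 0, so the next lookup
by ASID returned NULL and the request failed:

    #include <stdio.h>
    #include <stdlib.h>
    #include <errno.h>

    struct as {                     /* stand-in for vhost_vdpa_as */
            int nmaps;
    };

    static struct as *as_table[4];  /* stand-in for the ASID lookup */

    static int unmap_all(unsigned int asid, int free_on_empty)
    {
            struct as *as = as_table[asid];

            if (!as)
                    return -EINVAL; /* the "iotlb is NULL" failure */

            as->nmaps = 0;          /* drop every mapping */
            if (free_on_empty) {    /* old behaviour */
                    free(as);
                    as_table[asid] = NULL;
            }
            return 0;
    }

    int main(void)
    {
            as_table[0] = calloc(1, sizeof(struct as));
            if (!as_table[0])
                    return 1;
            as_table[0]->nmaps = 1;

            /* First unmap succeeds but frees the AS ... */
            printf("first unmap:  %d\n", unmap_all(0, 1));
            /* ... so a second unmap on the same ASID fails. */
            printf("second unmap: %d\n", unmap_all(0, 1));
            return 0;
    }

With the patch applied, the free is deferred, so the second unmap
would simply find an empty iotlb and succeed.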
> This causes the failure.
> To fix this, do not remove the AS when iotlb->nmaps is 0;
> the AS will be freed later in vhost_vdpa_clean.
>
> Signed-off-by: Cindy Lu <lulu@...hat.com>
Do we need a Fixes tag, and does this need to go to -stable?
Thanks
> ---
> drivers/vhost/vdpa.c | 8 +-------
> 1 file changed, 1 insertion(+), 7 deletions(-)
>
> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> index 7be9d9d8f01c..74c7d1f978b7 100644
> --- a/drivers/vhost/vdpa.c
> +++ b/drivers/vhost/vdpa.c
> @@ -851,11 +851,7 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v,
> if (!v->in_batch)
> ops->set_map(vdpa, asid, iotlb);
> }
> - /* If we are in the middle of batch processing, delay the free
> - * of AS until BATCH_END.
> - */
> - if (!v->in_batch && !iotlb->nmaps)
> - vhost_vdpa_remove_as(v, asid);
> +
> }
>
> static int vhost_vdpa_va_map(struct vhost_vdpa *v,
> @@ -1112,8 +1108,6 @@ static int vhost_vdpa_process_iotlb_msg(struct vhost_dev *dev, u32 asid,
> if (v->in_batch && ops->set_map)
> ops->set_map(vdpa, asid, iotlb);
> v->in_batch = false;
> - if (!iotlb->nmaps)
> - vhost_vdpa_remove_as(v, asid);
> break;
> default:
> r = -EINVAL;
> --
> 2.34.3
>
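
For reference, the release path the commit message calls
vhost_vdpa_clean appears to be vhost_vdpa_cleanup() upstream. A
simplified sketch of it (from memory of the v6.3-era
drivers/vhost/vdpa.c, trimmed; details may differ) shows why deferring
the free is safe: every remaining AS is removed on release anyway.

    static void vhost_vdpa_cleanup(struct vhost_vdpa *v)
    {
            struct vhost_vdpa_as *as;
            u32 asid;

            /* Remove every AS still present, including any whose
             * nmaps dropped to 0 before release. */
            for (asid = 0; asid < v->vdpa->nas; asid++) {
                    as = asid_to_as(v, asid);
                    if (as)
                            vhost_vdpa_remove_as(v, asid);
            }
            /* remaining teardown omitted */
    }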