Message-ID: <CACGkMEt8TL=fc6tUfb7BTXTSsLZa=K39znJcFqpNhy7Oaw5s6A@mail.gmail.com>
Date: Fri, 17 Feb 2023 13:58:21 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Longpeng (Mike, Cloud Infrastructure Service Product Dept.)"
<longpeng2@...wei.com>
Cc: mst@...hat.com, arei.gonglei@...wei.com, yechuan@...wei.com,
huangzhichao@...wei.com, virtualization@...ts.linux-foundation.org,
linux-kernel@...r.kernel.org, kvm@...r.kernel.org
Subject: Re: [PATCH] vhost-vdpa: cleanup memory maps when closing vdpa fds
On Wed, Feb 15, 2023 at 1:16 PM Longpeng (Mike, Cloud Infrastructure
Service Product Dept.) <longpeng2@...wei.com> wrote:
>
>
>
> > On 2023/2/15 10:56, Jason Wang wrote:
> > On Wed, Feb 15, 2023 at 10:49 AM Longpeng (Mike, Cloud Infrastructure
> > Service Product Dept.) <longpeng2@...wei.com> wrote:
> >>
> >>
> >>
> >> On 2023/2/15 10:00, Jason Wang wrote:
> >>> On Tue, Feb 14, 2023 at 2:28 PM Longpeng (Mike, Cloud Infrastructure
> >>> Service Product Dept.) <longpeng2@...wei.com> wrote:
> >>>>
> >>>>
> >>>>
> >>>> On 2023/2/14 14:16, Jason Wang wrote:
> >>>>>
> >>>>> On 2023/1/31 22:53, Longpeng(Mike) wrote:
> >>>>>> From: Longpeng <longpeng2@...wei.com>
> >>>>>>
> >>>>>> We must clean up all memory maps when closing the vdpa fds, otherwise
> >>>>>> some critical resources (e.g. memory, iommu maps) will be leaked if the
> >>>>>> userspace exits unexpectedly (e.g. kill -9).
> >>>>>
> >>>>>
> >>>>> Sounds like a bug in the kernel; should we fix it there?
> >>>>>
> >>>>
> >>>> For example, the iommu map is set up when QEMU issues a
> >>>> VHOST_IOTLB_UPDATE request and it'll be freed when QEMU issues a
> >>>> VHOST_IOTLB_INVALIDATE request.
> >>>>
> >>>> So maybe releasing these resources in the vdpa framework in the kernel
> >>>> is a suitable choice?
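> >>>>
> >>>> (For reference, a rough, untested userspace-side sketch of how such a
> >>>> mapping is installed; in current kernels these requests are iotlb
> >>>> messages written to the vhost-vdpa fd, and the iotlb_update() helper
> >>>> name and the values here are made up for illustration:
> >>>>
> >>>> #include <unistd.h>
> >>>> #include <linux/vhost.h>
> >>>> #include <linux/vhost_types.h>
> >>>>
> >>>> /* Install one IOVA->uaddr mapping on an open vhost-vdpa fd. */
> >>>> static int iotlb_update(int vdpa_fd, __u64 iova, __u64 size, __u64 uaddr)
> >>>> {
> >>>> 	struct vhost_msg_v2 msg = {
> >>>> 		.type = VHOST_IOTLB_MSG_V2,
> >>>> 		.iotlb = {
> >>>> 			.iova  = iova,
> >>>> 			.size  = size,
> >>>> 			.uaddr = uaddr,
> >>>> 			.perm  = VHOST_ACCESS_RW,
> >>>> 			.type  = VHOST_IOTLB_UPDATE,
> >>>> 		},
> >>>> 	};
> >>>>
> >>>> 	/* The message is written to the fd; tearing the mapping down
> >>>> 	 * again uses the same struct with VHOST_IOTLB_INVALIDATE.
> >>>> 	 */
> >>>> 	return write(vdpa_fd, &msg, sizeof(msg)) == (ssize_t)sizeof(msg) ? 0 : -1;
> >>>> }
> >>>>
> >>>> If QEMU is killed between the UPDATE and the INVALIDATE, nothing on
> >>>> the userspace side can undo the map, which is the scenario above.)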
> >>>
> >>> I think I need to understand what "resources" means here:
> >>>
> >>> For iommu mapping, it should be freed by vhost_vdpa_free_domain() in
> >>> vhost_vdpa_release()?
> >>>
> >>
> >> Please consider the following lifecycle of the vdpa device:
> >>
> >> 1. vhost_vdpa_open
> >>        vhost_vdpa_alloc_domain
> >>
> >> 2. vhost_vdpa_pa_map
> >>        pin_user_pages
> >>        vhost_vdpa_map
> >>            iommu_map
> >>
> >> 3. kill QEMU
> >>
> >> 4. vhost_vdpa_release
> >>        vhost_vdpa_free_domain
> >>
> >> In this case, we have no opportunity to invoke unpin_user_pages or
> >> iommu_unmap to free the memory.
> >
> > We do:
> >
> > vhost_vdpa_cleanup()
> >     vhost_vdpa_remove_as()
> >         vhost_vdpa_iotlb_unmap()
> >             vhost_vdpa_pa_unmap()
> >                 unpin_user_pages()
> >                 vhost_vdpa_general_unmap()
> >                     iommu_unmap()
> > ?
> >
> Oh, my codebase is linux-6.2-rc2, and commit c070c1912a8 ("vhost-vdpa:
> fix an iotlb memory leak") already fixed this bug in linux-6.2-rc3.
>
> > Btw, it looks like we should call vhost_vdpa_free_domain() *after*
> > vhost_vdpa_cleanup(), otherwise it's a UAF?
> >
> I think so. v->domain is set to NULL in vhost_vdpa_free_domain(), so it
> seems it would trigger a null-pointer access in my case.
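>
> Something like the following reordering of vhost_vdpa_release() is what
> I'd expect the fix to look like (untested sketch; only the ordering of
> the last two cleanup calls changes):
>
> 	mutex_lock(&d->mutex);
> 	filep->private_data = NULL;
> 	vhost_vdpa_clean_irq(v);
> 	vhost_vdpa_reset(v);
> 	vhost_dev_stop(&v->vdev);
> 	vhost_vdpa_config_put(v);
> 	vhost_vdpa_cleanup(v);		/* the unmap path still needs v->domain... */
> 	vhost_vdpa_free_domain(v);	/* ...so drop the domain last */
> 	mutex_unlock(&d->mutex);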
Patches are welcome.
Thanks
>
> > Thanks
> >
> >>
> >>> static int vhost_vdpa_release(struct inode *inode, struct file *filep)
> >>> {
> >>> 	struct vhost_vdpa *v = filep->private_data;
> >>> 	struct vhost_dev *d = &v->vdev;
> >>>
> >>> 	mutex_lock(&d->mutex);
> >>> 	filep->private_data = NULL;
> >>> 	vhost_vdpa_clean_irq(v);
> >>> 	vhost_vdpa_reset(v);
> >>> 	vhost_dev_stop(&v->vdev);
> >>> 	vhost_vdpa_free_domain(v);
> >>> 	vhost_vdpa_config_put(v);
> >>> 	vhost_vdpa_cleanup(v);
> >>> 	mutex_unlock(&d->mutex);
> >>>
> >>> 	atomic_dec(&v->opened);
> >>> 	complete(&v->completion);
> >>>
> >>> 	return 0;
> >>> }
> >>>
> >>>>
> >>>> By the way, Jason, can you reproduce the problem on your machine?
> >>>>
> >>>
> >>> Haven't got time to do this, but it should be the author's
> >>> responsibility to validate it anyhow.
> >>>
> >>> Thanks
> >>>
> >>>>> Thanks
> >>>>>
> >>>>>
> >>>>>>
> >>>>>> Signed-off-by: Longpeng <longpeng2@...wei.com>
> >>>>>> ---
> >>>>>> drivers/vhost/vdpa.c | 13 +++++++++++++
> >>>>>> 1 file changed, 13 insertions(+)
> >>>>>>
> >>>>>> diff --git a/drivers/vhost/vdpa.c b/drivers/vhost/vdpa.c
> >>>>>> index a527eeeac637..37477cffa5aa 100644
> >>>>>> --- a/drivers/vhost/vdpa.c
> >>>>>> +++ b/drivers/vhost/vdpa.c
> >>>>>> @@ -823,6 +823,18 @@ static void vhost_vdpa_unmap(struct vhost_vdpa *v,
> >>>>>>  	vhost_vdpa_remove_as(v, asid);
> >>>>>>  }
> >>>>>>
> >>>>>> +static void vhost_vdpa_clean_map(struct vhost_vdpa *v)
> >>>>>> +{
> >>>>>> +	struct vhost_vdpa_as *as;
> >>>>>> +	u32 asid;
> >>>>>> +
> >>>>>> +	for (asid = 0; asid < v->vdpa->nas; asid++) {
> >>>>>> +		as = asid_to_as(v, asid);
> >>>>>> +		if (as)
> >>>>>> +			vhost_vdpa_unmap(v, &as->iotlb, 0ULL, 0ULL - 1);
> >>>>>> +	}
> >>>>>> +}
> >>>>>> +
> >>>>>>  static int vhost_vdpa_va_map(struct vhost_vdpa *v,
> >>>>>> 			     struct vhost_iotlb *iotlb,
> >>>>>> 			     u64 iova, u64 size, u64 uaddr, u32 perm)
> >>>>>> @@ -1247,6 +1259,7 @@ static int vhost_vdpa_release(struct inode *inode, struct file *filep)
> >>>>>>  	vhost_vdpa_clean_irq(v);
> >>>>>>  	vhost_vdpa_reset(v);
> >>>>>>  	vhost_dev_stop(&v->vdev);
> >>>>>> +	vhost_vdpa_clean_map(v);
> >>>>>>  	vhost_vdpa_free_domain(v);
> >>>>>>  	vhost_vdpa_config_put(v);
> >>>>>>  	vhost_vdpa_cleanup(v);
> >>>>>
> >>>>
> >>>
> >>
> >
>