Message-ID: <17e3312e-686b-c5dd-852d-e9ffb7f4c707@redhat.com>
Date: Fri, 16 Apr 2021 11:02:57 +0800
From: Jason Wang <jasowang@...hat.com>
To: Yongji Xie <xieyongji@...edance.com>
Cc: Stefan Hajnoczi <stefanha@...hat.com>,
"Michael S. Tsirkin" <mst@...hat.com>,
Stefano Garzarella <sgarzare@...hat.com>,
Parav Pandit <parav@...dia.com>,
Christoph Hellwig <hch@...radead.org>,
Christian Brauner <christian.brauner@...onical.com>,
Randy Dunlap <rdunlap@...radead.org>,
Matthew Wilcox <willy@...radead.org>, viro@...iv.linux.org.uk,
Jens Axboe <axboe@...nel.dk>, bcrl@...ck.org,
Jonathan Corbet <corbet@....net>,
Mika Penttilä <mika.penttila@...tfour.com>,
Dan Carpenter <dan.carpenter@...cle.com>,
virtualization@...ts.linux-foundation.org, netdev@...r.kernel.org,
kvm@...r.kernel.org, linux-fsdevel@...r.kernel.org
Subject: Re: [PATCH v6 10/10] Documentation: Add documentation for VDUSE
On 2021/4/16 10:58 AM, Yongji Xie wrote:
> On Fri, Apr 16, 2021 at 10:20 AM Jason Wang <jasowang@...hat.com> wrote:
>>
>> On 2021/4/15 7:17 PM, Yongji Xie wrote:
>>> On Thu, Apr 15, 2021 at 5:05 PM Jason Wang <jasowang@...hat.com> wrote:
>>>> On 2021/4/15 4:36 PM, Jason Wang wrote:
>>>>>> Please state this explicitly at the start of the document. Existing
>>>>>> interfaces like FUSE are designed to avoid trusting userspace.
>>>>> There are some subtle differences here. VDUSE presents a device to the
>>>>> kernel, which means the IOMMU is probably the only thing that prevents a
>>>>> malicious device.
>>>>>
>>>>>
>>>>>> Therefore
>>>>>> people might think the same is the case here. It's critical that people
>>>>>> are aware of this before deploying VDUSE with virtio-vdpa.
>>>>>>
>>>>>> We should probably pause here and think about whether it's possible to
>>>>>> avoid trusting userspace. Even if it takes some effort and costs some
>>>>>> performance it would probably be worthwhile.
>>>>> Since the bounce buffer is used, the only attack surface is the
>>>>> coherent area. If we want to enforce stronger isolation, we need to use
>>>>> a shadow virtqueue (which I proposed in an earlier version) in this
>>>>> case. But I'm not sure it's worth doing that.
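For illustration, a minimal sketch of the isolation property a shadow
virtqueue aims to provide (a real shadow virtqueue gives the device a
separate, kernel-mediated ring; only the read-once-then-validate core is
shown here, and all names are hypothetical, not from the patch):

    /* Snapshot a shared-ring descriptor once into kernel-private
     * memory and validate the private copy; the shared ring is never
     * re-read for this descriptor, so a later rewrite by a malicious
     * device has no effect.
     */
    struct desc_snapshot {
            u64 addr;
            u32 len;
    };

    static int snapshot_and_validate(const struct vring_desc *shared,
                                     struct desc_snapshot *snap,
                                     u64 bounce_size)
    {
            snap->addr = le64_to_cpu(shared->addr);
            snap->len = le32_to_cpu(shared->len);

            /* Reject before any use; guard against wrap-around, too. */
            if (!snap->len || snap->addr + snap->len < snap->addr ||
                snap->addr + snap->len > bounce_size)
                    return -EINVAL;

            return 0;   /* from here on, use only *snap */
    }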
>>>>
>>>> So this reminds me of the discussion at the end of last year. We need to
>>>> make sure we don't suffer from the same issues for VDUSE, at least:
>>>>
>>>> https://yhbt.net/lore/all/c3629a27-3590-1d9f-211b-c0b7be152b32@redhat.com/T/#mc6b6e2343cbeffca68ca7a97e0f473aaa871c95b
>>>>
>>>> Or we can solve it at the virtio level, e.g. remember the DMA address
>>>> instead of depending on the addr in the descriptor ring.
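For illustration, a minimal sketch of the map-time half of that idea:
keep a kernel-private copy of the DMA address/len per descriptor slot so
that unmap never needs to read the shared ring back. All names here
(desc_dma_state, map_and_record, RING_SIZE) are hypothetical, not the
actual vring code:

    /* Kernel-owned record of what was mapped for each descriptor
     * slot; the device can scribble on the shared ring but cannot
     * touch this array.
     */
    struct desc_dma_state {
            dma_addr_t addr;
            u32 len;
    };

    static struct desc_dma_state dma_state[RING_SIZE];

    static int map_and_record(struct device *dev, void *buf, u32 len,
                              unsigned int slot, struct vring_desc *desc)
    {
            dma_addr_t addr = dma_map_single(dev, buf, len, DMA_TO_DEVICE);

            if (dma_mapping_error(dev, addr))
                    return -ENOMEM;

            /* Publish to the (untrusted) shared ring... */
            desc->addr = cpu_to_le64(addr);
            desc->len = cpu_to_le32(len);

            /* ...but also record it privately for unmap time. */
            dma_state[slot].addr = addr;
            dma_state[slot].len = len;
            return 0;
    }

(Mainline virtio_ring later grew similar bookkeeping along these lines in
struct vring_desc_extra.)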
>>>>
>>> I might be missing something. But VDUSE has recorded the DMA address
>>> during DMA mapping, so we would not do the bouncing if the addr/length is
>>> invalid during DMA unmapping. Is that enough?
>>
>> E.g. a malicious device writes a bogus DMA address into the descriptor
>> ring, so we have:
>>
>>   vring_unmap_one_split(desc->addr, desc->len)
>>     -> dma_unmap_single()
>>       -> vduse_dev_unmap_page()
>>         -> vduse_domain_bounce()
>>
>> And in vduse_domain_bounce() we have:
>>
>>     while (size) {
>>             map = &domain->bounce_maps[iova >> PAGE_SHIFT];
>>             offset = offset_in_page(iova);
>>             sz = min_t(size_t, PAGE_SIZE - offset, size);
>>
>> This means we trust the iova, which is dangerous and exactly the issue
>> mentioned in the above link.
>>
>> At the VDUSE level, we need to make sure the iova is legal.
>>
> I think we already do that in vduse_domain_bounce():
>
>     while (size) {
>             map = &domain->bounce_maps[iova >> PAGE_SHIFT];
>
>             if (WARN_ON(!map->bounce_page ||
>                         map->orig_phys == INVALID_PHYS_ADDR))
>                     return;
So you don't check whether the iova is legal before using it, so it's at
least a potential out-of-bounds access of bounce_maps[], isn't it? (E.g.
what happens if iova is ULLONG_MAX?)
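For illustration, a minimal sketch of the kind of range check being asked
for here, assuming domain->bounce_size bounds the area covered by
bounce_maps[] (the helper name is hypothetical):

    /* Validate [iova, iova + size) before indexing bounce_maps[],
     * rejecting wrap-around (e.g. iova == ULLONG_MAX) as well as
     * ranges that end past the bounce area.
     */
    static bool vduse_iova_in_range(struct vduse_iova_domain *domain,
                                    u64 iova, size_t size)
    {
            return size && iova + size > iova &&
                   iova + size <= domain->bounce_size;
    }

A check like "if (WARN_ON(!vduse_iova_in_range(domain, iova, size)))
return;" at the top of vduse_domain_bounce() would then make the while
loop safe against a bogus iova.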
>
>
>> At the virtio level, we should not trust desc->addr.
>>
> We would not touch desc->addr after vring_unmap_one_split(). So I'm
> not sure what we need to do at the virtio level.
I think the point is to record the DMA address/len somewhere instead of
reading them from the descriptor ring.
Thanks
>
> Thanks,
> Yongji
>
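For illustration, the unmap-time half of the map-time sketch earlier in
the thread: the kernel-private record is used, and the (possibly
device-modified) desc->addr/desc->len are never read back. Names remain
hypothetical:

    static void unmap_from_record(struct device *dev, unsigned int slot)
    {
            /* Use the kernel-owned record, not the shared ring: even
             * if a malicious device rewrote desc->addr/desc->len after
             * the map, we unmap exactly what we mapped.
             */
            dma_unmap_single(dev, dma_state[slot].addr,
                             dma_state[slot].len, DMA_TO_DEVICE);
    }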