Message-ID: <568B35F8.7080302@gmail.com>
Date: Tue, 5 Jan 2016 11:18:16 +0800
From: Yang Zhang <yang.zhang.wz@...il.com>
To: Jason Wang <jasowang@...hat.com>, mst@...hat.com,
kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-api@...r.kernel.org
Subject: Re: [PATCH RFC] vhost: basic device IOTLB support
On 2016/1/4 14:22, Jason Wang wrote:
>
>
> On 01/04/2016 09:39 AM, Yang Zhang wrote:
>> On 2015/12/31 15:13, Jason Wang wrote:
>>> This patch tries to implement a device IOTLB for vhost. This could
>>> be used in cooperation with a userspace (qemu) implementation of
>>> IOMMU for a secure DMA environment in the guest.
>>>
>>> The idea is simple. When vhost meets an IOTLB miss, it requests the
>>> assistance of userspace to do the translation. This is done through:
>>>
>>> - Filling the translation request at a preset userspace address
>>> (this address is set through the VHOST_SET_IOTLB_REQUEST_ENTRY
>>> ioctl).
>>> - Notifying userspace through an eventfd (this eventfd is set
>>> through the VHOST_SET_IOTLB_FD ioctl).
>>>
>>> When userspace finishes the translation, it updates the vhost IOTLB
>>> through the VHOST_UPDATE_IOTLB ioctl. Userspace is also in charge of
>>> snooping IOMMU IOTLB invalidations and using VHOST_UPDATE_IOTLB to
>>> invalidate the corresponding entries in vhost.
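>>>
>>> To illustrate, here is a minimal sketch of the userspace side of
>>> this protocol. The request-entry layout, the exact ioctl argument
>>> types and the iommu_translate() helper are assumptions made for
>>> illustration only; the real ABI is whatever this patch defines in
>>> the vhost uapi header:
>>>
>>> #include <stdint.h>
>>> #include <unistd.h>
>>> #include <sys/eventfd.h>
>>> #include <sys/ioctl.h>
>>> #include <linux/vhost.h>   /* patched header with the new ioctls */
>>>
>>> /* Hypothetical layout of the preset request entry that vhost
>>>  * fills on a miss and userspace completes. */
>>> struct vhost_iotlb_entry {
>>>         uint64_t iova;           /* guest IO virtual address */
>>>         uint64_t size;
>>>         uint64_t userspace_addr; /* result, filled by userspace */
>>>         uint32_t perm;
>>>         uint32_t flags;
>>> };
>>>
>>> static struct vhost_iotlb_entry req;
>>>
>>> /* Hypothetical hook into the emulated IOMMU (e.g. qemu's vtd
>>>  * emulation) that performs the actual translation. */
>>> extern uint64_t iommu_translate(uint64_t iova, uint64_t *size,
>>>                                 uint32_t *perm);
>>>
>>> /* Setup: hand vhost an eventfd for miss notification and the
>>>  * address of the request entry it should fill on a miss. */
>>> static int setup_iotlb(int vhost_fd)
>>> {
>>>         int efd = eventfd(0, 0);
>>>
>>>         ioctl(vhost_fd, VHOST_SET_IOTLB_FD, &efd);
>>>         ioctl(vhost_fd, VHOST_SET_IOTLB_REQUEST_ENTRY, &req);
>>>         return efd;
>>> }
>>>
>>> /* Miss loop: block on the eventfd until vhost signals a miss,
>>>  * translate the iova, and push the result back into the vhost
>>>  * IOTLB. Invalidation (driven by snooping guest IOTLB flushes)
>>>  * would presumably reuse VHOST_UPDATE_IOTLB with an entry
>>>  * marked invalid. */
>>> static void handle_misses(int vhost_fd, int efd)
>>> {
>>>         uint64_t cnt;
>>>
>>>         for (;;) {
>>>                 read(efd, &cnt, sizeof(cnt)); /* wait for a miss */
>>>                 req.userspace_addr = iommu_translate(req.iova,
>>>                                                      &req.size,
>>>                                                      &req.perm);
>>>                 ioctl(vhost_fd, VHOST_UPDATE_IOTLB, &req);
>>>         }
>>> }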
>>
>> Is there any performance data that shows the difference with IOTLB
>> support?
>
> Basic testing shows it is slower than without the IOTLB.
>
>> I suspect we may see a performance decrease, since the flush code
>> path is longer than before.
>>
>
> Yes, it also depends on the TLB hit rate.
>
> If lots of dynamic mappings and unmappings are used in the guest
> (e.g. a normal Linux driver), this method should be much slower since:
>
> - there are lots of invalidations, and the invalidation path is slow.
> - the hit rate is low, and userspace-assisted address translation is
> expensive.
> - userspace IOMMU/IOTLB implementations are limited (qemu's vtd
> emulation simply empties all entries when it's full).
>
> Another method is to implement a kernel IOMMU (e.g. vtd). But I'm not
> sure vhost is the best place to do this, since vhost should be
> architecture independent. Maybe we'd better do it in kvm, or have a pv
> IOMMU implementation in vhost.
Actually, I have on hand a kernel IOMMU (virtual vtd) patch which can
pass a physical device through to an L2 guest. But it is just a draft
patch written several years ago. If there is a real requirement for
it, I can rebase it and send it out for review.
>
> On the other hand, if fixed mappings are used in the guest (e.g. dpdk
> in the guest), we could see a 100% hit rate with almost no
> invalidation, so the performance penalty should be negligible. This
> should be the main use case for this patch.
>
> The patch is just a prototype for discussion. Any other ideas are welcome.
>
> Thanks
>
--
best regards
yang
--