Message-ID: <55b3d55a-950f-eeaf-1908-bed78a1a9200@redhat.com>
Date: Mon, 24 Dec 2018 11:43:31 +0800
From: Jason Wang <jasowang@...hat.com>
To: "Michael S. Tsirkin" <mst@...hat.com>
Cc: kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
netdev@...r.kernel.org, linux-kernel@...r.kernel.org,
Jintack Lim <jintack@...columbia.edu>
Subject: Re: [PATCH net V2 4/4] vhost: log dirty page correctly
On 2018/12/14 9:20 PM, Michael S. Tsirkin wrote:
> On Fri, Dec 14, 2018 at 10:43:03AM +0800, Jason Wang wrote:
>> On 2018/12/13 10:31 PM, Michael S. Tsirkin wrote:
>>>> Just to make sure I understand this. It looks to me we should:
>>>>
>>>> - allow passing GIOVA->GPA through UAPI
>>>>
>>>> - cache GIOVA->GPA somewhere but still use GIOVA->HVA in device IOTLB for
>>>> performance
>>>>
>>>> Is this what you suggest?
>>>>
>>>> Thanks
>>> Not really. We already have GPA->HVA, so I suggested a flag to pass
>>> GIOVA->GPA in the IOTLB.
>>>
>>> This has advantages for security, since then only a single table
>>> needs to be validated to ensure the guest does not corrupt
>>> QEMU memory.
>>>
>> I wonder how much we can gain through this. Currently, the qemu IOMMU
>> gives the GIOVA->GPA mapping, and the qemu vhost code translates GPA to
>> HVA and then passes GIOVA->HVA to vhost. I don't see a difference.
>>
>> Thanks
> The difference is in security, not in performance. Getting a bad HVA
> corrupts QEMU memory, and it might be guest-controlled. Very risky.
How can this be controlled by the guest? The HVA is generated from qemu
ram blocks, which are totally under the control of the qemu memory core
rather than the guest.
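
To spell out the flow in illustrative, made-up C (none of the names
below are the real qemu identifiers): the vIOMMU table gives
GIOVA->GPA, the ram blocks give GPA->HVA, and qemu pushes the composed
GIOVA->HVA entry to vhost:

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* One entry of a translation table. */
struct map {
        uint64_t base;          /* start of the input range  */
        uint64_t target;        /* start of the output range */
        uint64_t size;
};

/* Walk one table; returns false on a miss. */
static bool xlate(const struct map *t, size_t n, uint64_t addr,
                  uint64_t *out)
{
        size_t i;

        for (i = 0; i < n; i++) {
                if (addr >= t[i].base && addr - t[i].base < t[i].size) {
                        *out = t[i].target + (addr - t[i].base);
                        return true;
                }
        }
        return false;
}

/*
 * What qemu pushes to vhost is the composition of the two tables.
 * The HVA can only come out of the ram-block (GPA->HVA) table that
 * qemu owns; the guest only ever controls the GIOVA->GPA half.
 */
static bool giova_to_hva(const struct map *viommu, size_t nv,
                         const struct map *ram, size_t nr,
                         uint64_t giova, uint64_t *hva)
{
        uint64_t gpa;

        return xlate(viommu, nv, giova, &gpa) &&    /* GIOVA -> GPA */
               xlate(ram, nr, gpa, hva);            /* GPA   -> HVA */
}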
Thanks
> If
> translations to HVA are done in a single place through a single table,
> it's safer, as there's a single risky place.
>
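
For completeness, the UAPI side of that suggestion could look roughly
like the below. The struct layout is the existing one from
linux/vhost.h (with stdint types in place of __u64/__u8); the
VHOST_IOTLB_UPDATE_GPA type is made up for illustration:

#include <stdint.h>

struct vhost_iotlb_msg {
        uint64_t iova;          /* GIOVA */
        uint64_t size;
        uint64_t uaddr;         /* today a HVA; a GPA with the new type */
        uint8_t perm;
        uint8_t type;           /* VHOST_IOTLB_UPDATE, ... */
};

/* Hypothetical: mark an update as carrying GIOVA->GPA, so vhost
 * resolves GPA->HVA itself through the memory table set via
 * VHOST_SET_MEM_TABLE. */
#define VHOST_IOTLB_UPDATE_GPA  5       /* value made up */

That would make the mem table the single place that ever produces a
HVA, which I guess is the "single risky place" you refer to above.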