Date:   Fri, 20 Apr 2018 11:52:47 +0800
From:   Jason Wang <jasowang@...hat.com>
To:     "Michael S. Tsirkin" <mst@...hat.com>
Cc:     Tiwei Bie <tiwei.bie@...el.com>, alex.williamson@...hat.com,
        ddutile@...hat.com, alexander.h.duyck@...el.com,
        virtio-dev@...ts.oasis-open.org, linux-kernel@...r.kernel.org,
        kvm@...r.kernel.org, virtualization@...ts.linux-foundation.org,
        netdev@...r.kernel.org, dan.daly@...el.com,
        cunming.liang@...el.com, zhihong.wang@...el.com,
        jianfeng.tan@...el.com, xiao.w.wang@...el.com
Subject: Re: [RFC] vhost: introduce mdev based hardware vhost backend



On 2018-04-20 02:40, Michael S. Tsirkin wrote:
> On Tue, Apr 10, 2018 at 03:25:45PM +0800, Jason Wang wrote:
>>>>> One problem is that, different virtio ring compatible devices
>>>>> may have different device interfaces. That is to say, we will
>>>>> need different drivers in QEMU. It could be troublesome. And
>>>>> that's what this patch trying to fix. The idea behind this
>>>>> patch is very simple: mdev is a standard way to emulate device
>>>>> in kernel.
>>>> So you just move the abstraction layer from qemu to kernel, and you still
>>>> need different drivers in kernel for different device interfaces of
>>>> accelerators. This looks even more complex than leaving it in qemu. As you
>>>> said, another idea is to implement userspace vhost backend for accelerators
>>>> which seems easier and could co-work with other parts of qemu without
>>>> inventing new type of messages.
>>> I'm not quite sure. Do you think it's acceptable to
>>> add various vendor specific hardware drivers in QEMU?
>>>
>> I don't object but we need to figure out the advantages of doing it in qemu
>> too.
>>
>> Thanks
> To be frank kernel is exactly where device drivers belong.  DPDK did
> move them to userspace but that's merely a requirement for data path.
> *If* you can have them in kernel that is best:
> - update kernel and there's no need to rebuild userspace

Well, you still need to rebuild userspace, since a new vhost backend is 
required which relays the vhost protocol through the mdev API. And I believe 
upgrading a userspace package is considered more lightweight than 
upgrading the kernel. With mdev, we're likely to repeat the story of the 
vhost API: dealing with features/versions and endlessly inventing new APIs 
for new features. And you will still need to rebuild the userspace.

> - apps can be written in any language no need to maintain multiple
>    libraries or add wrappers

This is not a big issue considering it's not a generic network driver but an 
mdev driver; the only possible user is a VM.

> - security concerns are much smaller (ok people are trying to
>    raise the bar with IOMMUs and such, but it's already pretty
>    good even without)

Well, I think not; kernel bugs are much more serious than userspace 
ones. And I bet the kernel driver itself won't be small.

>
> The biggest issue is that you let userspace poke at the
> device which is also allowed by the IOMMU to poke at
> kernel memory (needed for kernel driver to work).

I don't quite get it. The userspace driver could be built on top of VFIO 
for sure, so kernel memory is perfectly isolated in this case.

>
> Yes, maybe if device is not buggy it's all fine, but
> it's better if we do not have to trust the device
> otherwise the security picture becomes more murky.
>
> I suggested attaching a PASID to (some) queues - see my old post "using
> PASIDs to enable a safe variant of direct ring access".
>
> Then using IOMMU with VFIO to limit access through queue to correct
> ranges of memory.

Well, a userspace driver could benefit from this too. And we can even go 
further by using nested IO page tables to share the IOVA address space 
between devices and a VM.

Thanks
