Message-ID: <20211110054121-mutt-send-email-mst@kernel.org>
Date: Wed, 10 Nov 2021 05:50:04 -0500
From: "Michael S. Tsirkin" <mst@...hat.com>
To: "Wang, Wei W" <wei.w.wang@...el.com>
Cc: "sgarzare@...hat.com" <sgarzare@...hat.com>,
"davem@...emloft.net" <davem@...emloft.net>,
"kuba@...nel.org" <kuba@...nel.org>,
Stefan Hajnoczi <stefanha@...hat.com>,
Paolo Bonzini <pbonzini@...hat.com>,
"kys@...rosoft.com" <kys@...rosoft.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"virtualization@...ts.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>,
"Nakajima, Jun" <jun.nakajima@...el.com>,
"Kleen, Andi" <andi.kleen@...el.com>
Subject: Re: [RFC] hypercall-vsock: add a new vsock transport
On Wed, Nov 10, 2021 at 07:12:36AM +0000, Wang, Wei W wrote:
> Hi,
>
> We plan to add a new vsock transport based on hypercall (e.g. vmcall on
> Intel CPUs). It transports AF_VSOCK packets between the guest and host,
> which is similar to virtio-vsock, vmci-vsock and hyperv-vsock.
>
> Compared to the above listed vsock transports, which are designed for
> high performance, the main advantages of hypercall-vsock are:
>
> 1) It is VMM agnostic. For example, a guest using hypercall-vsock can
>    run on either KVM, Hyper-V, or VMware.
Hypercalls are fundamentally hypervisor dependent though.
Assuming you can carve out a hypervisor independent hypercall,
using it for something as mundane and specific as vsock for TDX
seems like huge overkill. For example, virtio could benefit from
the faster vmexits that hypercalls give you for signalling.
How about a combination of virtio-mmio and hypercalls for fast-path
signalling, then?
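
To make that concrete, here is a minimal sketch of the notify path
(hypothetical only: the hypercall number HVC_VIRTIO_NOTIFY and its
register ABI are invented here, and vmcall is Intel-specific, AMD
would use vmmcall). The stock virtio-mmio driver just writes the
queue index to the VIRTIO_MMIO_QUEUE_NOTIFY register; the hybrid
would pass it in a register instead:

#include <linux/virtio.h>

/* Invented hypercall number, for illustration only. */
#define HVC_VIRTIO_NOTIFY	0x564e

/*
 * Drop-in replacement for virtio-mmio's vm_notify(): instead of an
 * MMIO write (an exit that has to be decoded from a page fault), pass
 * the queue index in a register and take the cheaper hypercall exit.
 */
static bool vm_notify_hypercall(struct virtqueue *vq)
{
	long nr = HVC_VIRTIO_NOTIFY;

	asm volatile("vmcall"	/* Intel; vmmcall on AMD */
		     : "+a"(nr)
		     : "b"((long)vq->index)
		     : "memory");
	return true;
}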
> 2) It is simpler. It doesn’t rely on any complex bus enumeration
>    (e.g. a virtio-pci based vsock device may need the whole
>    implementation of PCI).
The next thing people will try to do is implement a bunch of other
devices on top of it. virtio used PCI simply because everyone
implements PCI. And the reason for *that* is that implementing a basic
PCI bus is dead simple; the whole of pci.c in QEMU is <3000 LOC.
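
And the guest-side cost is small too: a bare-metal guest can find a
virtio-pci device with nothing more than legacy config-space accesses.
A freestanding sketch (x86 port I/O, the 0xCF8/0xCFC mechanism; the
helper names are mine, not from any existing driver):

#include <stdint.h>

/* x86 port I/O, the usual single-instruction wrappers. */
static inline void outl(uint16_t port, uint32_t val)
{
	asm volatile("outl %0, %1" : : "a"(val), "Nd"(port));
}

static inline uint32_t inl(uint16_t port)
{
	uint32_t val;

	asm volatile("inl %1, %0" : "=a"(val) : "Nd"(port));
	return val;
}

/* Legacy configuration mechanism: address to 0xCF8, data at 0xCFC. */
static uint32_t pci_cfg_read32(uint8_t bus, uint8_t dev, uint8_t fn,
			       uint8_t off)
{
	outl(0xCF8, 0x80000000u | ((uint32_t)bus << 16) |
		    ((uint32_t)dev << 11) | ((uint32_t)fn << 8) |
		    (off & 0xFCu));
	return inl(0xCFC);
}

/* Scan bus 0 for vendor 0x1af4 (Red Hat / virtio). */
static int find_virtio_dev(void)
{
	for (uint8_t dev = 0; dev < 32; dev++)
		if ((pci_cfg_read32(0, dev, 0, 0) & 0xffff) == 0x1af4)
			return dev;
	return -1;
}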
> An example usage is the communication between MigTD and the host
> (Page 8 at
> https://static.sched.com/hosted_files/kvmforum2021/ef/TDX%20Live%20Migration_Wei%20Wang.pdf).
>
> MigTD communicates with the host to assist the migration of the
> target (user) TD. MigTD is part of the TCB, so its implementation is
> expected to be as simple as possible (e.g. a bare metal implementation
> without an OS, no PCI driver support).
Try to list the drawbacks? For example, unlike PCI, passthrough for
nested virt isn't possible, and neither are hardware implementations.
> Looking forward to your feedback.
>
> Thanks,
> Wei