Message-ID: <YmsTUGJfVzU3XTkl@google.com>
Date: Thu, 28 Apr 2022 22:21:04 +0000
From: Sean Christopherson <seanjc@...gle.com>
To: Peter Oskolkov <posk@...gle.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Wanpeng Li <wanpengli@...cent.com>,
Jim Mattson <jmattson@...gle.com>,
Joerg Roedel <joro@...tes.org>, kvm@...r.kernel.org,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>, x86@...nel.org,
"H . Peter Anvin" <hpa@...or.com>, linux-kernel@...r.kernel.org,
Paul Turner <pjt@...gle.com>, Peter Oskolkov <posk@...k.io>
Subject: Re: [PATCH] KVM: x86: add HC_VMM_CUSTOM hypercall
On Thu, Apr 28, 2022, Peter Oskolkov wrote:
> On Thu, Apr 21, 2022 at 10:14 AM Paolo Bonzini <pbonzini@...hat.com> wrote:
> >
> > On 4/21/22 18:51, Peter Oskolkov wrote:
> > > Allow kvm-based VMMs to request KVM to pass a custom vmcall
> > > from the guest to the VMM in the host.
> > >
> > > Quite often, operating systems research projects and/or specialized
> > > paravirtualized workloads would benefit from an extra-low-overhead,
> > > extra-low-latency guest-host communication channel.
> >
> > You can use a memory page and an I/O port. It should be as fast as a
> > hypercall. You can even change it to use ioeventfd if an asynchronous
> > channel is enough, and then it's going to be less than 1 us latency.
>
> So this function:
>
> uint8_t hyperchannel_ping(uint8_t arg)
> {
>         uint8_t inb;
>         uint16_t port = PORT;
>
>         asm volatile(
>                 "outb %[arg], %[port]\n\t"  // write arg
>                 "inb  %[port], %[inb]\n\t"  // read res
>                 : [inb] "=a"(inb)
>                 : [arg] "a"(arg), [port] "Nd"(port)
>         );
>         return inb;
> }
>
> takes about 5.5usec vs 2.5usec for a vmcall on the same
> hardware/kernel/etc. I've also tried AF_VSOCK, and a roundtrip there
> is 30-50usec.
>
> The main problem of port I/O vs a vmcall is that with port I/O a
> second VM exit is needed to return any result to the guest. Am I
> missing something?
The intent of the port I/O approach is that it's just a kick; the actual data
payload is delivered via a separate memory channel.
0. guest/host establish a memory channel, e.g. guest announces address to host at boot
1. guest writes parameters to the memory channel
2. guest does port I/O to let the host know there's work to be done
3. KVM exits to the host
4. host does the work, fills memory with the response
5. host does KVM_RUN to re-enter the guest
6. KVM runs the guest
7. guest reads the response from memory
This is what Paolo meant by "memory page".
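To make that concrete, a rough guest-side sketch of the flow above (hypothetical;
CHANNEL_GPA, KICK_PORT and struct hyperchannel are made-up names, and in practice
the page address and port would be negotiated with the VMM):

  #include <stdint.h>

  #define KICK_PORT   0x0600          /* assumed port, agreed with the VMM */
  #define CHANNEL_GPA 0x100000UL      /* assumed guest-physical address of the page */

  struct hyperchannel {
          uint64_t request;           /* guest -> host parameters */
          uint64_t response;          /* host -> guest result */
  };

  /* Step 0: one shared page, announced to the host at boot. */
  static volatile struct hyperchannel *chan = (void *)CHANNEL_GPA;

  static inline void kick_host(void)
  {
          /* Step 2: any write to the port forces a VM exit. */
          asm volatile("outb %0, %1"
                       : : "a"((uint8_t)0), "Nd"((uint16_t)KICK_PORT));
  }

  static uint64_t hyperchannel_call(uint64_t req)
  {
          chan->request = req;        /* step 1: publish the parameters */
          kick_host();                /* steps 2-6: exit, host work, re-entry */
          return chan->response;      /* step 7: read the response */
  }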
Using an ioeventfd avoids the overhead of #3 and #5. Instead of exiting to
userspace, KVM signals the ioeventfd to wake the userspace I/O thread and immediately
resumes the guest. The catch is that if you want a synchronous response, the guest
will have to wait for the host I/O thread to service the request, at which point the
benefits of avoiding the exit to userspace are largely lost.
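For reference, registering such an ioeventfd from the VMM looks roughly like this
(a sketch, assuming vm_fd is an already-created KVM VM fd and KICK_PORT matches
the guest side):

  #include <sys/eventfd.h>
  #include <sys/ioctl.h>
  #include <linux/kvm.h>

  #define KICK_PORT 0x0600

  static int register_kick_ioeventfd(int vm_fd)
  {
          int efd = eventfd(0, EFD_NONBLOCK);
          struct kvm_ioeventfd ioev = {
                  .addr  = KICK_PORT,
                  .len   = 1,
                  .fd    = efd,
                  .flags = KVM_IOEVENTFD_FLAG_PIO,  /* port I/O, not MMIO */
          };

          if (efd < 0)
                  return -1;

          /* A 1-byte write to KICK_PORT now signals efd and KVM re-enters the
           * guest immediately; an I/O thread polls efd and services the page. */
          if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0)
                  return -1;

          return efd;
  }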
Things like virtio-net (and presumably other virtio devices?) take advantage of
ioeventfd by using a ring buffer, e.g. put a Tx payload in the buffer, kick the
host and move on.
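A minimal sketch of that fire-and-forget pattern (not the real virtio ring layout,
just the idea: enqueue a descriptor in shared memory, kick, keep running; reuses
kick_host() from the earlier sketch):

  #include <stdint.h>

  #define RING_SIZE 256                     /* power of two, assumed */

  struct tx_ring {
          uint32_t head;                    /* written by the guest (producer) */
          uint32_t tail;                    /* written by the host (consumer)  */
          uint64_t desc[RING_SIZE];         /* payload descriptors             */
  };

  /* Returns 0 on success, -1 if the ring is full; never waits for the host. */
  static int tx_enqueue(struct tx_ring *ring, uint64_t desc)
  {
          uint32_t head = ring->head;
          uint32_t tail = __atomic_load_n(&ring->tail, __ATOMIC_ACQUIRE);

          if (head - tail == RING_SIZE)
                  return -1;                /* full, try again later */

          ring->desc[head % RING_SIZE] = desc;
          __atomic_store_n(&ring->head, head + 1, __ATOMIC_RELEASE);

          kick_host();                      /* ioeventfd kick; the guest moves on */
          return 0;
  }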