Message-ID: <ZW94T8Fx2eJpwKQS@google.com>
Date: Tue, 5 Dec 2023 11:21:51 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Nicolas Saenz Julienne <nsaenz@...zon.com>
Cc: Maxim Levitsky <mlevitsk@...hat.com>, kvm@...r.kernel.org,
linux-kernel@...r.kernel.org, linux-hyperv@...r.kernel.org,
pbonzini@...hat.com, vkuznets@...hat.com, anelkz@...zon.com,
graf@...zon.com, dwmw@...zon.co.uk, jgowans@...zon.com,
kys@...rosoft.com, haiyangz@...rosoft.com, decui@...rosoft.com,
x86@...nel.org, linux-doc@...r.kernel.org
Subject: Re: [RFC 05/33] KVM: x86: hyper-v: Introduce VTL call/return
prologues in hypercall page
On Fri, Dec 01, 2023, Nicolas Saenz Julienne wrote:
> On Fri Dec 1, 2023 at 5:47 PM UTC, Sean Christopherson wrote:
> > On Fri, Dec 01, 2023, Nicolas Saenz Julienne wrote:
> > > On Fri Dec 1, 2023 at 4:32 PM UTC, Sean Christopherson wrote:
> > > > On Fri, Dec 01, 2023, Nicolas Saenz Julienne wrote:
> > > > > > To support this I think that we can add a userspace msr filter on the HV_X64_MSR_HYPERCALL,
> > > > > > although I am not 100% sure if a userspace msr filter overrides the in-kernel msr handling.
> > > > >
> > > > > I thought about it at the time. It's not that simple though, we should
> > > > > still let KVM set the hypercall bytecode, and other quirks like the Xen
> > > > > one.
> > > >
> > > > Yeah, that Xen quirk is quite the killer.
> > > >
> > > > Can you provide pseudo-assembly for what the final page is supposed to look like?
> > > > I'm struggling mightily to understand what this is actually trying to do.
> > >
> > > I'll make it as simple as possible (disregard 32-bit support and that
> > > Xen exists):
> > >
> > > vmcall <- Offset 0, regular Hyper-V hypercalls enter here
> > > ret
> > > mov rax,rcx <- VTL call hypercall enters here
> >
> > I'm missing who/what defines "here" though. What generates the CALL that points
> > at this exact offset? If the exact offset is dictated in the TLFS, then aren't
> > we screwed with the whole Xen quirk, which inserts 5 bytes before that first VMCALL?
>
> Yes, sorry, I should've included some more context.
>
> Here's a rundown (from memory) of how the first VTL call happens:
> - CPU0 starts running at VTL0.
> - Hyper-V enables VTL1 on the partition.
> - Hyper-V enables VTL1 on CPU0, but doesn't yet switch to it. It passes
> the initial VTL1 CPU state alongside the enablement hypercall
> arguments.
> - Hyper-V sets the Hypercall page overlay address through
> HV_X64_MSR_HYPERCALL. KVM fills it.
> - Hyper-V gets the VTL-call and VTL-return offset into the hypercall
> page using the VP Register HvRegisterVsmCodePageOffsets (VP register
> handling is in user-space).
Ah, so the guest sets the offsets by "writing" HvRegisterVsmCodePageOffsets via
a HvSetVpRegisters() hypercall.
I don't see a sane way to handle this in KVM if userspace handles HvSetVpRegisters().
E.g. if the guest requests offsets that don't leave enough room for KVM to shove
in its data, then presumably userspace needs to reject HvSetVpRegisters(). But
that requires userspace to know exactly how many bytes KVM is going to write at
each offset.
My vote is to have userspace do all the patching. IIUC, all of this is going to
be mutually exclusive with kvm_xen_hypercall_enabled(), i.e. userspace doesn't
need to worry about setting RAX[31]. At that point, it's just VMCALL versus
VMMCALL, and userspace is more than capable of identifying whether it's running
on Intel or AMD.
> - Hyper-V performs the first VTL-call, and has all it needs to move
> between VTL0/1.
>
> Nicolas