Message-ID: <Z4qv9VTs0CZ0zoxW@google.com>
Date: Fri, 17 Jan 2025 11:31:01 -0800
From: Sean Christopherson <seanjc@...gle.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: Paolo Bonzini <pbonzini@...hat.com>, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
Tom Lendacky <thomas.lendacky@....com>, Isaku Yamahata <isaku.yamahata@...el.com>,
Kai Huang <kai.huang@...el.com>, Xiaoyao Li <xiaoyao.li@...el.com>,
Adrian Hunter <adrian.hunter@...el.com>, Rick Edgecombe <rick.p.edgecombe@...el.com>
Subject: Re: [PATCH v4 0/6] KVM: x86: Prep KVM hypercall handling for TDX
On Wed, Jan 15, 2025, Binbin Wu wrote:
> On 12/19/2024 10:40 AM, Sean Christopherson wrote:
> > On Wed, 27 Nov 2024 16:43:38 -0800, Sean Christopherson wrote:
> > > Effectively v4 of Binbin's series to handle hypercall exits to userspace in
> > > a generic manner, so that TDX
> > >
> > > Binbin and Kai, this is fairly different than what we last discussed. While
> > > sorting through Binbin's latest patch, I stumbled on what I think/hope is an
> > > approach that will make life easier for TDX. Rather than have common code
> > > set the return value, _and_ have TDX implement a callback to do the same for
> > > user return MSRs, just use the callback for all paths.
> > >
> > > [...]
> > Applied patch 1 to kvm-x86 fixes. I'm going to hold off on the rest until the
> > dust settles on the SEAMCALL interfaces, e.g. in case TDX ends up marshalling
> > state into the "normal" GPRs.
> Hi Sean,
>
> Based on your suggestions in
> https://lore.kernel.org/kvm/Z1suNzg2Or743a7e@google.com, v2 of "KVM: TDX:
> TDX hypercalls may exit to userspace" is planned to morph the TDG.VP.VMCALL
> with a KVM hypercall into EXIT_REASON_VMCALL and marshal r10~r14 from
> vp_enter_args in struct vcpu_tdx into the appropriate x86 registers for KVM
> hypercall handling.
...
> To test TDX, I made some modifications to your patch
> "KVM: x86: Refactor __kvm_emulate_hypercall() into a macro"
> Do the following changes make sense to you?
Yes, but I think we can go a step further and effectively revert the bulk of commit
e913ef159fad ("KVM: x86: Split core of hypercall emulation to helper function"),
i.e. have ____kvm_emulate_hypercall() read the GPRs instead of passing them in
via the macro.
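To illustrate the shape I'm suggesting, here's a toy userspace model (NOT kernel
code; all toy_* names are stand-ins for kvm_rXX_read(), complete_hypercall_exit(),
etc.): the inner helper reads the GPRs itself, so the macro shrinks to just the
"ret > 0 => run the completion callback" dance.

```c
#include <assert.h>

/* Toy stand-in for struct kvm_vcpu; only the GPRs we care about. */
struct toy_vcpu {
	unsigned long rax, rbx, rcx, rdx, rsi;
};

/* Stand-ins for the real kvm_rXX_read() accessors. */
static unsigned long toy_rax_read(struct toy_vcpu *v) { return v->rax; }
static unsigned long toy_rbx_read(struct toy_vcpu *v) { return v->rbx; }
static unsigned long toy_rcx_read(struct toy_vcpu *v) { return v->rcx; }
static unsigned long toy_rdx_read(struct toy_vcpu *v) { return v->rdx; }
static unsigned long toy_rsi_read(struct toy_vcpu *v) { return v->rsi; }

/* After the revert, the inner helper pulls nr/a0..a3 out of the vCPU on
 * its own instead of having the macro pass them in as parameters. */
static int ____toy_emulate_hypercall(struct toy_vcpu *vcpu, int cpl,
				     int (*complete)(struct toy_vcpu *))
{
	unsigned long nr = toy_rax_read(vcpu);
	unsigned long a0 = toy_rbx_read(vcpu);
	unsigned long a1 = toy_rcx_read(vcpu);
	unsigned long a2 = toy_rdx_read(vcpu);
	unsigned long a3 = toy_rsi_read(vcpu);

	(void)cpl;
	/* Fake "handling": stash a result in rax and return ret > 0,
	 * i.e. "handled in kernel, run the completion callback". */
	vcpu->rax = nr + a0 + a1 + a2 + a3;
	return 1;
}

/* The macro no longer marshals registers, only handles completion. */
#define __toy_emulate_hypercall(_vcpu, cpl, complete)			\
({									\
	int __ret = ____toy_emulate_hypercall(_vcpu, cpl, complete);	\
									\
	if (__ret > 0)							\
		__ret = complete(_vcpu);				\
	__ret;								\
})

static int toy_complete_hypercall(struct toy_vcpu *vcpu)
{
	(void)vcpu;
	return 1;	/* "resume the guest" */
}
```

The statement-expression macro is kept (as in the patch above) so that
complete_hypercall can be invoked with the caller's context; only the register
reads move into the inner helper.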
> diff --git a/arch/x86/kvm/x86.c b/arch/x86/kvm/x86.c
> index a2198807290b..2c5df57ad799 100644
> --- a/arch/x86/kvm/x86.c
> +++ b/arch/x86/kvm/x86.c
> @@ -10088,9 +10088,7 @@ int kvm_emulate_hypercall(struct kvm_vcpu *vcpu)
> if (kvm_hv_hypercall_enabled(vcpu))
> return kvm_hv_hypercall(vcpu);
> - return __kvm_emulate_hypercall(vcpu, rax, rbx, rcx, rdx, rsi,
> - is_64_bit_hypercall(vcpu),
> - kvm_x86_call(get_cpl)(vcpu),
> + return __kvm_emulate_hypercall(vcpu, kvm_x86_call(get_cpl)(vcpu),
> complete_hypercall_exit);
> }
> EXPORT_SYMBOL_GPL(kvm_emulate_hypercall);
> diff --git a/arch/x86/kvm/x86.h b/arch/x86/kvm/x86.h
> index b00ecbfef000..989bed5b48b0 100644
> --- a/arch/x86/kvm/x86.h
> +++ b/arch/x86/kvm/x86.h
> @@ -623,19 +623,18 @@ int ____kvm_emulate_hypercall(struct kvm_vcpu *vcpu, unsigned long nr,
> int op_64_bit, int cpl,
> int (*complete_hypercall)(struct kvm_vcpu *));
> -#define __kvm_emulate_hypercall(_vcpu, nr, a0, a1, a2, a3, op_64_bit, cpl, complete_hypercall) \
> -({ \
> - int __ret; \
> - \
> - __ret = ____kvm_emulate_hypercall(_vcpu, \
> - kvm_##nr##_read(_vcpu), kvm_##a0##_read(_vcpu), \
> - kvm_##a1##_read(_vcpu), kvm_##a2##_read(_vcpu), \
> - kvm_##a3##_read(_vcpu), op_64_bit, cpl, \
> - complete_hypercall); \
> - \
> - if (__ret > 0) \
> - __ret = complete_hypercall(_vcpu); \
> - __ret; \
> +#define __kvm_emulate_hypercall(_vcpu, cpl, complete_hypercall) \
> +({ \
> + int __ret; \
> + __ret = ____kvm_emulate_hypercall(_vcpu, kvm_rax_read(_vcpu), \
> + kvm_rbx_read(_vcpu), kvm_rcx_read(_vcpu), \
> + kvm_rdx_read(_vcpu), kvm_rsi_read(_vcpu), \
> + is_64_bit_hypercall(_vcpu), cpl, \
> + complete_hypercall); \
> + \
> + if (__ret > 0) \
> + __ret = complete_hypercall(_vcpu); \
> + __ret; \
> })
> int kvm_emulate_hypercall(struct kvm_vcpu *vcpu);
>
>
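For reference, the marshalling Binbin describes above (copying r10~r14 out of
vp_enter_args into the registers the common hypercall code reads) can be modeled
in userspace roughly as follows. This is a sketch, not the actual series: the
struct layouts and toy_* names are illustrative stand-ins for struct vcpu_tdx
and the kvm_rXX_write() accessors, assuming the r10 = nr, r11..r14 = a0..a3
convention used for KVM hypercalls over TDG.VP.VMCALL.

```c
#include <assert.h>

/* Toy stand-in for struct kvm_vcpu's GPRs. */
struct toy_vcpu {
	unsigned long rax, rbx, rcx, rdx, rsi;
};

/* Toy stand-in for struct vcpu_tdx; the guest's TDG.VP.VMCALL
 * arguments arrive in r10..r14. */
struct toy_vcpu_tdx {
	unsigned long vp_enter_args[5];	/* r10, r11, r12, r13, r14 */
};

/* Marshal the VMCALL registers into the GPRs that the common
 * __kvm_emulate_hypercall() path reads. */
static void toy_marshal_vmcall_regs(const struct toy_vcpu_tdx *tdx,
				    struct toy_vcpu *vcpu)
{
	vcpu->rax = tdx->vp_enter_args[0];	/* r10 -> nr */
	vcpu->rbx = tdx->vp_enter_args[1];	/* r11 -> a0 */
	vcpu->rcx = tdx->vp_enter_args[2];	/* r12 -> a1 */
	vcpu->rdx = tdx->vp_enter_args[3];	/* r13 -> a2 */
	vcpu->rsi = tdx->vp_enter_args[4];	/* r14 -> a3 */
}
```

With the registers marshalled this way, TDX can reuse kvm_emulate_hypercall()
unchanged rather than carrying a parallel implementation.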