Message-ID: <ZJt8fjfNQYIV9wVk@google.com>
Date: Tue, 27 Jun 2023 17:19:10 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: Binbin Wu <binbin.wu@...ux.intel.com>
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
pbonzini@...hat.com, chao.gao@...el.com, kai.huang@...el.com,
David.Laight@...lab.com, robert.hu@...ux.intel.com
Subject: Re: [PATCH v9 5/6] KVM: x86: Untag address when LAM applicable
On Tue, Jun 06, 2023, Binbin Wu wrote:
> Untag address for 64-bit memory/MMIO operand in instruction emulations
> and VMExit handlers when LAM is applicable.
>
> For instruction emulation, untag the address in __linearize() before the
> canonical check. LAM doesn't apply to addresses used for instruction
> fetches or to those that specify the targets of jump and call instructions,
> so use X86EMUL_F_SKIPLAM to skip LAM untagging in those cases.
>
> For VMExit handlers related to 64-bit linear address:
> - Cases that need address untagging:
> Operand(s) of VMX instructions and INVPCID.
> Operand(s) of SGX ENCLS.
> - Cases where LAM doesn't apply:
> Operand of INVLPG.
> Linear address in INVPCID descriptor (no change needed).
> Linear address in INVVPID descriptor (confirmed to be exempt, although it is
> not called out in the LAM spec; no change needed).
> BASEADDR specified in SESC of ECREATE (no change needed).
>
> Note:
> LAM doesn't apply to writes to control registers or MSRs.
> LAM masking applies before paging, so the faulting linear address in CR2
> doesn't contain the metadata.
> The guest linear address saved in VMCS doesn't contain metadata.
>
> Co-developed-by: Robert Hoo <robert.hu@...ux.intel.com>
> Signed-off-by: Robert Hoo <robert.hu@...ux.intel.com>
> Signed-off-by: Binbin Wu <binbin.wu@...ux.intel.com>
> Reviewed-by: Chao Gao <chao.gao@...el.com>
> Tested-by: Xuelian Guo <xuelian.guo@...el.com>
> ---
> arch/x86/kvm/emulate.c | 16 +++++++++++++---
> arch/x86/kvm/kvm_emulate.h | 2 ++
> arch/x86/kvm/vmx/nested.c | 2 ++
> arch/x86/kvm/vmx/sgx.c | 1 +
> arch/x86/kvm/x86.c | 7 +++++++
> 5 files changed, 25 insertions(+), 3 deletions(-)
>
> diff --git a/arch/x86/kvm/emulate.c b/arch/x86/kvm/emulate.c
> index e89afc39e56f..c135adb26f1e 100644
> --- a/arch/x86/kvm/emulate.c
> +++ b/arch/x86/kvm/emulate.c
> @@ -701,6 +701,7 @@ static __always_inline int __linearize(struct x86_emulate_ctxt *ctxt,
> *max_size = 0;
> switch (mode) {
> case X86EMUL_MODE_PROT64:
> + ctxt->ops->untag_addr(ctxt, &la, flags);
> *linear = la;
Ha! Returning the untagged address does help:
	*linear = ctxt->ops->get_untagged_address(ctxt, la, flags);
> va_bits = ctxt_virt_addr_bits(ctxt);
> if (!__is_canonical_address(la, va_bits))
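Something like this for the full PROT64 case (rough sketch only; the hook name
and exact signature follow the suggestion above and are not final):

	case X86EMUL_MODE_PROT64:
		/*
		 * Untag before the canonical check so that LAM metadata bits
		 * don't cause a spurious failure on an otherwise canonical
		 * address.
		 */
		*linear = la = ctxt->ops->get_untagged_address(ctxt, la, flags);
		va_bits = ctxt_virt_addr_bits(ctxt);
		if (!__is_canonical_address(la, va_bits))
			goto bad;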
> @@ -771,8 +772,12 @@ static inline int assign_eip(struct x86_emulate_ctxt *ctxt, ulong dst)
>
> if (ctxt->op_bytes != sizeof(unsigned long))
> addr.ea = dst & ((1UL << (ctxt->op_bytes << 3)) - 1);
> + /*
> + * LAM doesn't apply to addresses that specify the targets of jump and
> + * call instructions.
> + */
> rc = __linearize(ctxt, addr, &max_size, 1, ctxt->mode, &linear,
> - X86EMUL_F_FETCH);
> + X86EMUL_F_FETCH | X86EMUL_F_SKIPLAM);
No need for anything LAM specific here; just skip untagging for all FETCH
accesses (unlike LASS, which skips checks only for branch targets).
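E.g. the untagging hook can simply ignore fetches, roughly along these lines
(illustrative only; lam_untag() is a stand-in for whatever the real untagging
logic ends up being):

	static u64 example_get_untagged_address(struct x86_emulate_ctxt *ctxt,
						u64 addr, unsigned int flags)
	{
		/* LAM never applies to instruction fetches (or branch targets). */
		if (flags & X86EMUL_F_FETCH)
			return addr;

		return lam_untag(ctxt, addr);	/* hypothetical untag helper */
	}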
> - rc = linearize(ctxt, ctxt->src.addr.mem, 1, false, &linear);
> + /* LAM doesn't apply to invlpg */
Comment unneeded if X86EMUL_F_INVLPG is added.
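I.e. have em_invlpg() pass a dedicated flag and let the untagging hook treat it
like a fetch, roughly (sketch only, local variable plumbing omitted):

	/* In em_invlpg(): pass a flag instead of relying on a comment. */
	rc = __linearize(ctxt, ctxt->src.addr.mem, &max_size, 1, ctxt->mode,
			 &linear, X86EMUL_F_INVLPG);

	/* ...and in the untagging hook: */
	if (flags & (X86EMUL_F_FETCH | X86EMUL_F_INVLPG))
		return addr;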