Message-ID: <ZJX6s2HxbHOUMXlj@google.com>
Date: Fri, 23 Jun 2023 13:04:03 -0700
From: Sean Christopherson <seanjc@...gle.com>
To: isaku.yamahata@...el.com
Cc: kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
isaku.yamahata@...il.com, Paolo Bonzini <pbonzini@...hat.com>,
erdemaktas@...gle.com, Sagi Shahar <sagis@...gle.com>,
David Matlack <dmatlack@...gle.com>,
Kai Huang <kai.huang@...el.com>,
Zhi Wang <zhi.wang.linux@...il.com>, chen.bo@...el.com,
linux-coco@...ts.linux.dev,
Chao Peng <chao.p.peng@...ux.intel.com>,
Ackerley Tng <ackerleytng@...gle.com>,
Vishal Annapurve <vannapurve@...gle.com>,
Michael Roth <michael.roth@....com>
Subject: Re: [RFC PATCH v2 4/6] KVM: x86: Introduce fault type to indicate kvm
page fault is private

On Thu, Jun 22, 2023, isaku.yamahata@...el.com wrote:
> diff --git a/arch/x86/kvm/mmu/mmu_internal.h b/arch/x86/kvm/mmu/mmu_internal.h
> index 7f9ec1e5b136..0ec0b927a391 100644
> --- a/arch/x86/kvm/mmu/mmu_internal.h
> +++ b/arch/x86/kvm/mmu/mmu_internal.h
> @@ -188,6 +188,13 @@ static inline bool is_nx_huge_page_enabled(struct kvm *kvm)
>  	return READ_ONCE(nx_huge_pages) && !kvm->arch.disable_nx_huge_pages;
>  }
>  
> +enum kvm_fault_type {
> +	KVM_FAULT_MEM_ATTR,
> +	KVM_FAULT_SHARED,
> +	KVM_FAULT_SHARED_ALWAYS,
> +	KVM_FAULT_PRIVATE,

This is silly. Just use AMD's error code bit, i.e. PFERR_GUEST_ENC_MASK as per
the SNP series.

  Bit 34 (ENC): Set to 1 if the guest’s effective C-bit was 1, 0 otherwise.
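
(For reference, the SNP series defines the new flag alongside the existing
PFERR_GUEST_FINAL/PAGE bits (32/33); roughly, exact form may differ:

	#define PFERR_GUEST_ENC_BIT	34
	#define PFERR_GUEST_ENC_MASK	(1ULL << PFERR_GUEST_ENC_BIT)

matching the hardware error code layout quoted above.)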

Just because Intel's ucode is too crusty to support error codes larger than 16
bits doesn't mean KVM can't utilize the bits :-) KVM already translates to AMD's
error codes for other things, e.g.

	error_code |= (exit_qualification & EPT_VIOLATION_GVA_TRANSLATED) != 0 ?
		      PFERR_GUEST_FINAL_MASK : PFERR_GUEST_PAGE_MASK;
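
(The rest of the synthetic error code is built from the exit qualification the
same way; roughly, from memory of handle_ept_violation(), not a verbatim copy:

	/* Is it a read fault? */
	error_code = (exit_qualification & EPT_VIOLATION_ACC_READ)
		     ? PFERR_USER_MASK : 0;
	/* Is it a write fault? */
	error_code |= (exit_qualification & EPT_VIOLATION_ACC_WRITE)
		      ? PFERR_WRITE_MASK : 0;

so one more synthetic bit for private vs. shared fits the existing pattern.)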

For TDX, handle_ept_violation() can do something like:

	if (is_tdx(vcpu->kvm))
		error_code |= (gpa & shared) ? 0 : PFERR_GUEST_ENC_MASK;
	else if (kvm_mem_is_private(vcpu->kvm, gpa_to_gfn(gpa)))
		error_code |= PFERR_GUEST_ENC_MASK;

And that's not even taking into account that TDX might have a separate entry point,
i.e. the "is_tdx()" check can probably be avoided.

As for optimizing kvm_mem_is_private() to avoid unnecessary xarray lookups, that
can and should be done separately, e.g.

static inline bool kvm_mem_is_private(struct kvm *kvm, gfn_t gfn)
{
	return IS_ENABLED(CONFIG_KVM_PRIVATE_MEM) &&
	       kvm_guest_has_private_mem(kvm) &&
	       kvm_get_memory_attributes(kvm, gfn) & KVM_MEMORY_ATTRIBUTE_PRIVATE;
}

where x86's implementation of kvm_guest_has_private_mem() can be

#define kvm_guest_has_private_mem(kvm)	(!!(kvm)->vm_type)
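
(For context, kvm_get_memory_attributes() is the xarray lookup being guarded;
roughly, per the memory attributes series, exact form may differ:

	static inline unsigned long kvm_get_memory_attributes(struct kvm *kvm, gfn_t gfn)
	{
		return xa_to_value(xa_load(&kvm->mem_attr_array, gfn));
	}

so for a legacy VM, where vm_type is 0, the xarray is never touched.)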