Message-ID: <ZiYo/giv0/sNhjX1@yzhao56-desk.sh.intel.com>
Date: Mon, 22 Apr 2024 17:08:14 +0800
From: Yan Zhao <yan.y.zhao@...el.com>
To: <isaku.yamahata@...el.com>
CC: <kvm@...r.kernel.org>, <linux-kernel@...r.kernel.org>,
<isaku.yamahata@...il.com>, Paolo Bonzini <pbonzini@...hat.com>,
<erdemaktas@...gle.com>, Sean Christopherson <seanjc@...gle.com>, Sagi Shahar
<sagis@...gle.com>, Kai Huang <kai.huang@...el.com>, <chen.bo@...el.com>,
<hang.yuan@...el.com>, <tina.zhang@...el.com>
Subject: Re: [PATCH v19 062/130] KVM: x86/tdp_mmu: Support TDX private
mapping for TDP MMU
On Mon, Feb 26, 2024 at 12:26:04AM -0800, isaku.yamahata@...el.com wrote:
> From: Isaku Yamahata <isaku.yamahata@...el.com>
>
> @@ -1041,6 +1255,8 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> struct kvm *kvm = vcpu->kvm;
> struct tdp_iter iter;
> struct kvm_mmu_page *sp;
> + gfn_t raw_gfn;
> + bool is_private = fault->is_private && kvm_gfn_shared_mask(kvm);
Why not move the kvm_gfn_shared_mask() check earlier, before fault->is_private
is determined?
e.g. in kvm_mmu_page_fault().
> int ret = RET_PF_RETRY;
>
> kvm_mmu_hugepage_adjust(vcpu, fault);
> @@ -1049,7 +1265,17 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
>
> rcu_read_lock();
>
> - tdp_mmu_for_each_pte(iter, mmu, fault->gfn, fault->gfn + 1) {
> + raw_gfn = gpa_to_gfn(fault->addr);
> +
> + if (is_error_noslot_pfn(fault->pfn) ||
The check for a noslot pfn is not required after this cleanup series [1] from
Sean, right?
> + !kvm_pfn_to_refcounted_page(fault->pfn)) {
What's the purpose of rejecting a non-refcounted page (which is useful for
trusted IO)?
Besides, it looks like pages are allocated via kvm_faultin_pfn_private() when
fault->is_private is set, so where would a non-refcounted page come from?
> + if (is_private) {
> + rcu_read_unlock();
> + return -EFAULT;
> + }
> + }
[1] https://lore.kernel.org/all/20240228024147.41573-1-seanjc@google.com/
> + tdp_mmu_for_each_pte(iter, mmu, is_private, raw_gfn, raw_gfn + 1) {
> int r;
>
> if (fault->nx_huge_page_workaround_enabled)
..
> diff --git a/virt/kvm/kvm_main.c b/virt/kvm/kvm_main.c
> index d399009ef1d7..e27c22449d85 100644
> --- a/virt/kvm/kvm_main.c
> +++ b/virt/kvm/kvm_main.c
> @@ -201,6 +201,7 @@ struct page *kvm_pfn_to_refcounted_page(kvm_pfn_t pfn)
>
> return NULL;
> }
> +EXPORT_SYMBOL_GPL(kvm_pfn_to_refcounted_page);
>
> /*
> * Switches to specified vcpu, until a matching vcpu_put()
> --
> 2.25.1
>
>