Message-ID: <26a3508e4f57f6104abecd90192f12375fe04ecc.camel@intel.com>
Date: Wed, 16 Nov 2022 11:58:46 +0000
From: "Huang, Kai" <kai.huang@...el.com>
To: "kvm@...r.kernel.org" <kvm@...r.kernel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"Yamahata, Isaku" <isaku.yamahata@...el.com>
CC: "pbonzini@...hat.com" <pbonzini@...hat.com>,
"Shahar, Sagi" <sagis@...gle.com>,
"Aktas, Erdem" <erdemaktas@...gle.com>,
"isaku.yamahata@...il.com" <isaku.yamahata@...il.com>,
"dmatlack@...gle.com" <dmatlack@...gle.com>,
"Christopherson,, Sean" <seanjc@...gle.com>
Subject: Re: [PATCH v10 049/108] KVM: x86/tdp_mmu: Support TDX private mapping
for TDP MMU
>
> +static inline int kvm_alloc_private_spt_for_split(struct kvm_mmu_page *sp, gfp_t gfp)
> +{
> + gfp &= ~__GFP_ZERO;
> + sp->private_spt = (void *)__get_free_page(gfp);
> + if (!sp->private_spt)
> + return -ENOMEM;
> + return 0;
> +}
> +
>
[...]
> @@ -1238,6 +1408,12 @@ int kvm_tdp_mmu_map(struct kvm_vcpu *vcpu, struct kvm_page_fault *fault)
> is_large_pte(iter.old_spte)) {
> if (tdp_mmu_zap_spte_atomic(vcpu->kvm, &iter))
> break;
> + /*
> + * TODO: large page support.
> + * Doesn't support large page for TDX now
> + */
> + KVM_BUG_ON(is_private_sptep(iter.sptep), vcpu->kvm);
> +
>
So large pages are not supported for private mappings, ...
> /*
> * The iter must explicitly re-read the spte here
> @@ -1480,6 +1656,12 @@ static struct kvm_mmu_page *__tdp_mmu_alloc_sp_for_split(gfp_t gfp, union kvm_mm
>
> sp->role = role;
> sp->spt = (void *)__get_free_page(gfp);
> + if (kvm_mmu_page_role_is_private(role)) {
> + if (kvm_alloc_private_spt_for_split(sp, gfp)) {
> + free_page((unsigned long)sp->spt);
> + sp->spt = NULL;
> + }
> + }
... then I don't think eager splitting can happen for a private mapping?
If so, should we just KVM_BUG_ON() if the role is private?
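
E.g., something like below in __tdp_mmu_alloc_sp_for_split() -- just a rough
sketch against this patch, not tested, and assuming @kvm (or the vCPU) is
reachable there so KVM_BUG_ON() can be used, otherwise a WARN_ON_ONCE() on the
role would do:

	sp->role = role;
	sp->spt = (void *)__get_free_page(gfp);
	/*
	 * Large pages are not supported for private GPAs yet, so eager
	 * page splitting should never be done for a private mapping.
	 * Instead of allocating a private_spt here, bug the VM if a
	 * private role ever shows up.
	 */
	KVM_BUG_ON(kvm_mmu_page_role_is_private(role), kvm);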