Message-Id: <824f3a80ea74d1065ec5e2f8c123aa64e527f7f0.1659854957.git.isaku.yamahata@intel.com>
Date: Sun, 7 Aug 2022 15:18:38 -0700
From: isaku.yamahata@...el.com
To: kvm@...r.kernel.org, linux-kernel@...r.kernel.org
Cc: isaku.yamahata@...el.com, isaku.yamahata@...il.com,
Paolo Bonzini <pbonzini@...hat.com>, erdemaktas@...gle.com,
Sean Christopherson <seanjc@...gle.com>,
Sagi Shahar <sagis@...gle.com>
Subject: [RFC PATCH 05/13] KVM: TDX: Pass size to tdx_measure_page()
From: Xiaoyao Li <xiaoyao.li@...el.com>
Extend tdx_measure_page() to take a size argument so that it can measure
large pages as well.
Signed-off-by: Xiaoyao Li <xiaoyao.li@...el.com>
Signed-off-by: Isaku Yamahata <isaku.yamahata@...el.com>
---
arch/x86/kvm/vmx/tdx.c | 8 +++++---
1 file changed, 5 insertions(+), 3 deletions(-)
diff --git a/arch/x86/kvm/vmx/tdx.c b/arch/x86/kvm/vmx/tdx.c
index b717d50ee4d3..b7a75c0adbfa 100644
--- a/arch/x86/kvm/vmx/tdx.c
+++ b/arch/x86/kvm/vmx/tdx.c
@@ -1417,13 +1417,15 @@ void tdx_load_mmu_pgd(struct kvm_vcpu *vcpu, hpa_t root_hpa, int pgd_level)
 	td_vmcs_write64(to_tdx(vcpu), SHARED_EPT_POINTER, root_hpa & PAGE_MASK);
 }
 
-static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa)
+static void tdx_measure_page(struct kvm_tdx *kvm_tdx, hpa_t gpa, int size)
 {
 	struct tdx_module_output out;
 	u64 err;
 	int i;
 
-	for (i = 0; i < PAGE_SIZE; i += TDX_EXTENDMR_CHUNKSIZE) {
+	WARN_ON_ONCE(size % TDX_EXTENDMR_CHUNKSIZE);
+
+	for (i = 0; i < size; i += TDX_EXTENDMR_CHUNKSIZE) {
 		err = tdh_mr_extend(kvm_tdx->tdr.pa, gpa + i, &out);
 		if (KVM_BUG_ON(err, &kvm_tdx->kvm)) {
 			pr_tdx_error(TDH_MR_EXTEND, err, &out);
@@ -1497,7 +1499,7 @@ static void __tdx_sept_set_private_spte(struct kvm *kvm, gfn_t gfn,
 		pr_tdx_error(TDH_MEM_PAGE_ADD, err, &out);
 		tdx_unpin_pfn(kvm, pfn);
 	} else if ((kvm_tdx->source_pa & KVM_TDX_MEASURE_MEMORY_REGION))
-		tdx_measure_page(kvm_tdx, gpa); /* TODO: handle page size > 4KB */
+		tdx_measure_page(kvm_tdx, gpa, KVM_HPAGE_SIZE(level));
 
 	kvm_tdx->source_pa = INVALID_PAGE;
 }
--
2.25.1