Message-ID: <20240905124107.6954-8-pratikrajesh.sampat@amd.com>
Date: Thu, 5 Sep 2024 07:41:05 -0500
From: "Pratik R. Sampat" <pratikrajesh.sampat@....com>
To: <kvm@...r.kernel.org>
CC: <seanjc@...gle.com>, <pbonzini@...hat.com>, <pgonda@...gle.com>,
<thomas.lendacky@....com>, <michael.roth@....com>, <shuah@...nel.org>,
<linux-kselftest@...r.kernel.org>, <linux-kernel@...r.kernel.org>
Subject: [PATCH v3 7/9] KVM: selftests: Add interface to manually flag protected/encrypted ranges
From: Michael Roth <michael.roth@....com>
For SEV and SNP, __vm_phy_pages_alloc() currently handles setting the
region->protected_phy_pages bitmap to mark which pages need to be
encrypted/measured into the initial guest state prior to
finalizing/starting the guest. It also marks which GPAs need to be mapped
as encrypted in the initial guest page tables.
This works when the virtual/physical allocators are used to manage
memory, but if a test manages its allocations/mappings directly, it needs
another way to set region->protected_phy_pages. Add an interface to
handle that.
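For example (illustrative sketch only, not part of this patch), a test
that places a blob at a fixed GPA in a memslot it manages itself could
mark those pages as protected before building its page tables. The
memslot number, GPA, page count, and helper name below are made up, and
details a real SEV/SNP test would need (memslot flags, backing type,
etc.) are omitted:

	#include "kvm_util.h"

	#define BLOB_MEMSLOT	10
	#define BLOB_GPA	0x100000000ul
	#define BLOB_NPAGES	4

	static void map_test_blob(struct kvm_vm *vm)
	{
		/* Back the range with a memslot the test manages directly. */
		vm_userspace_mem_region_add(vm, VM_MEM_SRC_ANONYMOUS, BLOB_GPA,
					    BLOB_MEMSLOT, BLOB_NPAGES, 0);

		/*
		 * Flag the pages as protected before creating the identity
		 * mapping so that virt_map() installs encrypted PTEs for them.
		 */
		vm_mem_set_protected(vm, BLOB_MEMSLOT, BLOB_GPA, BLOB_NPAGES);
		virt_map(vm, BLOB_GPA, BLOB_GPA, BLOB_NPAGES);
	}
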
Signed-off-by: Michael Roth <michael.roth@....com>
Signed-off-by: Pratik R. Sampat <pratikrajesh.sampat@....com>
Tested-by: Peter Gonda <pgonda@...gle.com>
Tested-by: Srikanth Aithal <sraithal@....com>
---
.../testing/selftests/kvm/include/kvm_util.h | 2 +
tools/testing/selftests/kvm/lib/kvm_util.c | 45 +++++++++++++++++--
2 files changed, 43 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/kvm/include/kvm_util.h b/tools/testing/selftests/kvm/include/kvm_util.h
index ab213708b551..642740fe1c59 100644
--- a/tools/testing/selftests/kvm/include/kvm_util.h
+++ b/tools/testing/selftests/kvm/include/kvm_util.h
@@ -394,6 +394,8 @@ static inline void vm_set_memory_attributes(struct kvm_vm *vm, uint64_t gpa,
vm_ioctl(vm, KVM_SET_MEMORY_ATTRIBUTES, &attr);
}
+void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
+ vm_paddr_t paddr, size_t num);
static inline void vm_mem_set_private(struct kvm_vm *vm, uint64_t gpa,
uint64_t size)
diff --git a/tools/testing/selftests/kvm/lib/kvm_util.c b/tools/testing/selftests/kvm/lib/kvm_util.c
index bbf90ad224da..d44a37aebcec 100644
--- a/tools/testing/selftests/kvm/lib/kvm_util.c
+++ b/tools/testing/selftests/kvm/lib/kvm_util.c
@@ -1991,6 +1991,43 @@ const char *exit_reason_str(unsigned int exit_reason)
return "Unknown";
}
+/*
+ * Mark which guest GFNs need to be encrypted prior to finalizing a CoCo VM.
+ *
+ * Input Args:
+ * vm - Virtual Machine
+ * memslot - Memory region containing the pages
+ * paddr - Starting physical address of the range to mark as encrypted
+ * num - Number of pages
+ *
+ * Output Args: None
+ *
+ * Return: None
+ *
+ * Generally __vm_phy_pages_alloc() will handle this automatically, but
+ * for cases where the test manages the physical allocation and mapping
+ * directly, this interface should be used to mark the physical pages that
+ * are intended to be encrypted as part of the initial guest state. This
+ * also affects whether virt_map()/virt_pg_map() will map the pages as
+ * encrypted or not in the initial guest page tables.
+ *
+ * If the initial guest state has already been finalized, then marking
+ * pages as encrypted is essentially a no-op, since nothing more can be
+ * encrypted into the initial guest state at that point.
+ */
+void vm_mem_set_protected(struct kvm_vm *vm, uint32_t memslot,
+ vm_paddr_t paddr, size_t num)
+{
+ struct userspace_mem_region *region;
+ sparsebit_idx_t pg, base;
+
+ base = paddr >> vm->page_shift;
+ region = memslot2region(vm, memslot);
+
+ for (pg = base; pg < base + num; ++pg)
+ sparsebit_set(region->protected_phy_pages, pg);
+}
+
/*
* Physical Contiguous Page Allocator
*
@@ -2048,11 +2085,11 @@ vm_paddr_t __vm_phy_pages_alloc(struct kvm_vm *vm, size_t num,
abort();
}
- for (pg = base; pg < base + num; ++pg) {
+ for (pg = base; pg < base + num; ++pg)
sparsebit_clear(region->unused_phy_pages, pg);
- if (protected)
- sparsebit_set(region->protected_phy_pages, pg);
- }
+
+ if (protected)
+ vm_mem_set_protected(vm, memslot, base << vm->page_shift, num);
return base * vm->page_size;
}
--
2.34.1