Message-Id: <20240821153844.60084-31-steven.price@arm.com>
Date: Wed, 21 Aug 2024 16:38:31 +0100
From: Steven Price <steven.price@arm.com>
To: kvm@vger.kernel.org,
	kvmarm@lists.linux.dev
Cc: Steven Price <steven.price@arm.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Marc Zyngier <maz@kernel.org>,
	Will Deacon <will@kernel.org>,
	James Morse <james.morse@arm.com>,
	Oliver Upton <oliver.upton@linux.dev>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Zenghui Yu <yuzenghui@huawei.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Joey Gouly <joey.gouly@arm.com>,
	Alexandru Elisei <alexandru.elisei@arm.com>,
	Christoffer Dall <christoffer.dall@arm.com>,
	Fuad Tabba <tabba@google.com>,
	linux-coco@lists.linux.dev,
	Ganapatrao Kulkarni <gankulkarni@os.amperecomputing.com>,
	Gavin Shan <gshan@redhat.com>,
	Shanker Donthineni <sdonthineni@nvidia.com>,
	Alper Gun <alpergun@google.com>
Subject: [PATCH v4 30/43] arm64: RME: Always use 4k pages for realms

Always split up huge pages to avoid the problems of managing them.
There are currently two issues:
1. The uABI for the VMM allows populating memory on 4k boundaries even
if the underlying allocator (e.g. hugetlbfs) is using a larger page
size. Using a memfd for private allocations will push this issue onto
the VMM as it will need to respect the granularity of the allocator.
2. The guest is able to request arbitrary ranges to be remapped as
shared. Again, with a memfd approach it will be up to the VMM to deal
with the complexity and either overmap (keep the huge mapping and add
an additional 'overlapping' shared mapping) or reject the request as
invalid due to the use of a huge page allocator.

For now, just break everything down to 4k pages in the RMM-controlled
stage 2.
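
To make the effect concrete, below is a minimal sketch of the relevant
part of the existing fault path (simplified from user_mem_abort() in
arch/arm64/kvm/mmu.c; not introduced by this patch). With vma_shift
forced to PAGE_SHIFT, the computed mapping size always collapses to
PAGE_SIZE:

	/*
	 * Simplified sketch: vma_shift determines the stage 2 mapping
	 * granule. Forcing vma_shift = PAGE_SHIFT for realms means
	 * vma_pagesize is always PAGE_SIZE, so the RMM-controlled
	 * stage 2 is only ever mapped at PTE level.
	 */
	vma_pagesize = 1UL << vma_shift;
	if (vma_pagesize == PMD_SIZE || vma_pagesize == PUD_SIZE)
		fault_ipa &= ~(vma_pagesize - 1); /* never true for realms */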
Signed-off-by: Steven Price <steven.price@....com>
---
arch/arm64/kvm/mmu.c | 4 ++++
1 file changed, 4 insertions(+)
diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
index eb8b8d013f3e..dfc3dd628e43 100644
--- a/arch/arm64/kvm/mmu.c
+++ b/arch/arm64/kvm/mmu.c
@@ -1580,6 +1580,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
if (logging_active) {
force_pte = true;
vma_shift = PAGE_SHIFT;
+ } else if (kvm_is_realm(kvm)) {
+		/* Force PTE level mappings for realms */
+ force_pte = true;
+ vma_shift = PAGE_SHIFT;
} else {
vma_shift = get_vma_page_shift(vma, hva);
}
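
For reference, kvm_is_realm() is the predicate added earlier in this
series. A minimal sketch of the shape it is assumed to take (the
authoritative definition is in the patch that introduces it):

	/*
	 * Sketch only (assumed from earlier in the series):
	 * kvm_rme_is_available is a static key set once the RMM has
	 * been detected at boot; kvm->arch.is_realm marks a VM that
	 * was created as a realm.
	 */
	static inline bool kvm_is_realm(struct kvm *kvm)
	{
		if (static_branch_unlikely(&kvm_rme_is_available) && kvm)
			return kvm->arch.is_realm;
		return false;
	}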
--
2.34.1