Message-ID: <3103faef-1f02-47f9-b1ca-ec6af200773f@arm.com>
Date: Tue, 20 May 2025 15:59:57 +0100
From: Suzuki K Poulose <suzuki.poulose@....com>
To: Steven Price <steven.price@....com>, kvm@...r.kernel.org,
kvmarm@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>, Marc Zyngier <maz@...nel.org>,
Will Deacon <will@...nel.org>, James Morse <james.morse@....com>,
Oliver Upton <oliver.upton@...ux.dev>, Zenghui Yu <yuzenghui@...wei.com>,
linux-arm-kernel@...ts.infradead.org, linux-kernel@...r.kernel.org,
Joey Gouly <joey.gouly@....com>, Alexandru Elisei
<alexandru.elisei@....com>, Christoffer Dall <christoffer.dall@....com>,
Fuad Tabba <tabba@...gle.com>, linux-coco@...ts.linux.dev,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
Gavin Shan <gshan@...hat.com>, Shanker Donthineni <sdonthineni@...dia.com>,
Alper Gun <alpergun@...gle.com>, "Aneesh Kumar K . V"
<aneesh.kumar@...nel.org>
Subject: Re: [PATCH v8 29/43] arm64: RME: Always use 4k pages for realms
On 16/04/2025 14:41, Steven Price wrote:
> Guest_memfd doesn't yet natively support huge pages, and it is currently
> difficult for a VMM to manage huge pages efficiently, so for now always
> split up mappings to PTE level (4k).
>
> The two issues that need progressing before supporting huge pages for
> realms are:
>
> 1. guest_memfd needs to be able to allocate from an appropriate
> allocator which can provide huge pages.
>
> 2. The VMM needs to be able to repurpose private memory for a shared
> mapping when the guest VM requests that memory be transitioned. Because
> this can happen at a 4k granularity, it isn't possible to
> free/reallocate while huge pages are in use. Allowing the VMM to
> mmap() the shared portion of a huge page would allow the huge page
> to be recreated when the memory is unshared and made protected again.
>
> These two issues are not specific to realms and don't affect the realm
> API, so for now just break everything down to 4k pages in the
> RMM-controlled stage 2. Future work can add huge page support without
> changing the uAPI.
>
> Signed-off-by: Steven Price <steven.price@....com>
With comments from Gavin addressed,
Reviewed-by: Suzuki K Poulose <suzuki.poulose@....com>
> ---
> Changes since v7:
> * Rewritten commit message
> ---
> arch/arm64/kvm/mmu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 02b66ee35426..29bab7a46033 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1653,6 +1653,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
> if (logging_active || is_protected_kvm_enabled()) {
> force_pte = true;
> vma_shift = PAGE_SHIFT;
> + } else if (vcpu_is_rec(vcpu)) {
> + // Force PTE level mappings for realms
> + force_pte = true;
> + vma_shift = PAGE_SHIFT;
> } else {
> vma_shift = get_vma_page_shift(vma, hva);
> }
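
For anyone reading the hunk out of context, here is a minimal, self-contained
sketch (plain C, not kernel code) of what forcing vma_shift to PAGE_SHIFT
buys. The constants and the THP upgrade below are simplified stand-ins for
the real logic around this hunk in arch/arm64/kvm/mmu.c (e.g.
transparent_hugepage_adjust()), not the upstream implementation:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-ins for the kernel's page-size constants (4k pages). */
#define PAGE_SHIFT	12
#define PMD_SHIFT	21

/*
 * Toy model of the granularity decision in user_mem_abort(): when the
 * fault is for a realm (vcpu_is_rec()), or while dirty logging is
 * active, force_pte is set and the stage 2 mapping is capped at
 * PAGE_SIZE regardless of how the VMA is backed.
 */
static unsigned long stage2_map_size(bool realm, bool logging, int vma_shift)
{
	bool force_pte = false;

	if (logging || realm) {
		force_pte = true;
		vma_shift = PAGE_SHIFT;
	}

	/* Without force_pte, a THP-backed VMA could still be upgraded. */
	if (!force_pte && vma_shift == PAGE_SHIFT)
		vma_shift = PMD_SHIFT;	/* stand-in for transparent_hugepage_adjust() */

	return 1UL << vma_shift;
}

int main(void)
{
	printf("normal VM, THP backing: %lu bytes\n",
	       stage2_map_size(false, false, PMD_SHIFT));
	printf("realm VM,  THP backing: %lu bytes\n",
	       stage2_map_size(true, false, PMD_SHIFT));
	return 0;
}

With the realm case folded in, the second line prints 4096 where the first
prints a 2M block, which is exactly the behaviour the patch enforces for the
RMM-controlled stage 2.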