Message-ID: <0cd3d811-1e05-4cdc-aaea-b45fddfc9e2d@redhat.com>
Date: Thu, 1 May 2025 12:59:03 +1000
From: Gavin Shan <gshan@...hat.com>
To: Steven Price <steven.price@....com>, kvm@...r.kernel.org,
kvmarm@...ts.linux.dev
Cc: Catalin Marinas <catalin.marinas@....com>, Marc Zyngier <maz@...nel.org>,
Will Deacon <will@...nel.org>, James Morse <james.morse@....com>,
Oliver Upton <oliver.upton@...ux.dev>,
Suzuki K Poulose <suzuki.poulose@....com>, Zenghui Yu
<yuzenghui@...wei.com>, linux-arm-kernel@...ts.infradead.org,
linux-kernel@...r.kernel.org, Joey Gouly <joey.gouly@....com>,
Alexandru Elisei <alexandru.elisei@....com>,
Christoffer Dall <christoffer.dall@....com>, Fuad Tabba <tabba@...gle.com>,
linux-coco@...ts.linux.dev,
Ganapatrao Kulkarni <gankulkarni@...amperecomputing.com>,
Shanker Donthineni <sdonthineni@...dia.com>, Alper Gun
<alpergun@...gle.com>, "Aneesh Kumar K . V" <aneesh.kumar@...nel.org>
Subject: Re: [PATCH v8 29/43] arm64: RME: Always use 4k pages for realms
On 4/16/25 11:41 PM, Steven Price wrote:
> Guest_memfd doesn't yet natively support huge pages, and there are
> currently difficulties for a VMM to manage huge pages efficiently, so
> for now always split mappings down to PTE level (4k).
>
> The two issues that need progressing before supporting huge pages for
> realms are:
>
> 1. guest_memfd needs to be able to allocate from an appropriate
>    allocator which can provide huge pages.
>
> 2. The VMM needs to be able to repurpose private memory for a shared
>    mapping when the guest VM requests that memory is transitioned.
>    Because this can happen at a 4k granularity, it isn't possible to
>    free/reallocate while huge pages are in use. Allowing the VMM to
>    mmap() the shared portion of a huge page would allow the huge page
>    to be recreated when the memory is unshared and made protected again.
>
> These two issues are not specific to realms and don't affect the realm
> API, so for now just break everything down to 4k pages in the
> RMM-controlled stage 2. Future work can add huge page support without
> changing the uAPI.
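As an illustration of issue 2 above (a minimal standalone sketch, not
part of the patch; the helper name and the 2M block size are assumptions
for the example): a block mapping only remains usable while every 4k
granule inside it has the same protected/shared state, so a single
granule transition forces a split down to PTEs.

	#include <stdbool.h>
	#include <stddef.h>

	#define GRANULE_SIZE	(4UL * 1024)		/* 4k granule */
	#define BLOCK_SIZE	(2UL * 1024 * 1024)	/* 2M stage-2 block */

	/*
	 * Hypothetical helper: a 2M block mapping can only be kept while
	 * every 4k granule inside it shares the same protected/shared
	 * state; one transitioned granule forces PTE-level mappings.
	 */
	static bool can_use_block_mapping(const bool *granule_is_shared)
	{
		size_t nr_granules = BLOCK_SIZE / GRANULE_SIZE;

		for (size_t i = 1; i < nr_granules; i++) {
			if (granule_is_shared[i] != granule_is_shared[0])
				return false;
		}
		return true;
	}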
>
> Signed-off-by: Steven Price <steven.price@....com>
> ---
> Changes since v7:
> * Rewritten commit message
> ---
> arch/arm64/kvm/mmu.c | 4 ++++
> 1 file changed, 4 insertions(+)
>
One nitpick below.
Reviewed-by: Gavin Shan <gshan@...hat.com>
> diff --git a/arch/arm64/kvm/mmu.c b/arch/arm64/kvm/mmu.c
> index 02b66ee35426..29bab7a46033 100644
> --- a/arch/arm64/kvm/mmu.c
> +++ b/arch/arm64/kvm/mmu.c
> @@ -1653,6 +1653,10 @@ static int user_mem_abort(struct kvm_vcpu *vcpu, phys_addr_t fault_ipa,
>  	if (logging_active || is_protected_kvm_enabled()) {
>  		force_pte = true;
>  		vma_shift = PAGE_SHIFT;
> +	} else if (vcpu_is_rec(vcpu)) {
> +		// Force PTE level mappings for realms
> +		force_pte = true;
> +		vma_shift = PAGE_SHIFT;

Nit: use a C-style comment, as preferred by the kernel coding style:

	/* Force PTE level mappings for realms */

>  	} else {
>  		vma_shift = get_vma_page_shift(vma, hva);
>  	}
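For reference, the resulting code in user_mem_abort() with the nit
applied would read (same logic as the patch, only the comment style
changed):

	if (logging_active || is_protected_kvm_enabled()) {
		force_pte = true;
		vma_shift = PAGE_SHIFT;
	} else if (vcpu_is_rec(vcpu)) {
		/* Force PTE level mappings for realms */
		force_pte = true;
		vma_shift = PAGE_SHIFT;
	} else {
		vma_shift = get_vma_page_shift(vma, hva);
	}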
Thanks,
Gavin