Message-ID: <868qmwg0r5.wl-maz@kernel.org>
Date: Fri, 16 May 2025 14:10:06 +0100
From: Marc Zyngier <maz@...nel.org>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: oliver.upton@...ux.dev,
joey.gouly@....com,
suzuki.poulose@....com,
yuzenghui@...wei.com,
catalin.marinas@....com,
will@...nel.org,
qperret@...gle.com,
linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev,
linux-kernel@...r.kernel.org,
kernel-team@...roid.com
Subject: Re: [PATCH v4 03/10] KVM: arm64: Add a range to __pkvm_host_share_guest()
On Fri, 09 May 2025 14:16:59 +0100,
Vincent Donnefort <vdonnefort@...gle.com> wrote:
>
> In preparation for supporting stage-2 huge mappings for np-guests, add a
> nr_pages argument to the __pkvm_host_share_guest() hypercall. This range
> supports only two values: 1 or PMD_SIZE / PAGE_SIZE (that is, 512 on a
> 4K-pages system).
>
> Signed-off-by: Vincent Donnefort <vdonnefort@...gle.com>
>
> diff --git a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> index 26016eb9323f..47aa7b01114f 100644
> --- a/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> +++ b/arch/arm64/kvm/hyp/include/nvhe/mem_protect.h
> @@ -39,7 +39,7 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages);
> int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages);
> int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages);
> int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages);
> -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
> +int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu *vcpu,
> enum kvm_pgtable_prot prot);
> int __pkvm_host_unshare_guest(u64 gfn, struct pkvm_hyp_vm *hyp_vm);
> int __pkvm_host_relax_perms_guest(u64 gfn, struct pkvm_hyp_vcpu *vcpu, enum kvm_pgtable_prot prot);
> diff --git a/arch/arm64/kvm/hyp/nvhe/hyp-main.c b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> index 59db9606e6e1..4d3d215955c3 100644
> --- a/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> +++ b/arch/arm64/kvm/hyp/nvhe/hyp-main.c
> @@ -245,7 +245,8 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
> {
> DECLARE_REG(u64, pfn, host_ctxt, 1);
> DECLARE_REG(u64, gfn, host_ctxt, 2);
> - DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 3);
> + DECLARE_REG(u64, nr_pages, host_ctxt, 3);
> + DECLARE_REG(enum kvm_pgtable_prot, prot, host_ctxt, 4);
> struct pkvm_hyp_vcpu *hyp_vcpu;
> int ret = -EINVAL;
>
> @@ -260,7 +261,7 @@ static void handle___pkvm_host_share_guest(struct kvm_cpu_context *host_ctxt)
> if (ret)
> goto out;
>
> - ret = __pkvm_host_share_guest(pfn, gfn, hyp_vcpu, prot);
> + ret = __pkvm_host_share_guest(pfn, gfn, nr_pages, hyp_vcpu, prot);
> out:
> cpu_reg(host_ctxt, 1) = ret;
> }
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 4d269210dae0..f0f7c6f83e57 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -696,10 +696,9 @@ static enum pkvm_page_state guest_get_page_state(kvm_pte_t pte, u64 addr)
> return pkvm_getstate(kvm_pgtable_stage2_pte_prot(pte));
> }
>
> -static int __guest_check_page_state_range(struct pkvm_hyp_vcpu *vcpu, u64 addr,
> +static int __guest_check_page_state_range(struct pkvm_hyp_vm *vm, u64 addr,
> u64 size, enum pkvm_page_state state)
> {
> - struct pkvm_hyp_vm *vm = pkvm_hyp_vcpu_to_hyp_vm(vcpu);
> struct check_walk_data d = {
> .desired = state,
> .get_page_state = guest_get_page_state,
> @@ -908,48 +907,81 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> return ret;
> }
>
> -int __pkvm_host_share_guest(u64 pfn, u64 gfn, struct pkvm_hyp_vcpu *vcpu,
> +static int __guest_check_transition_size(u64 phys, u64 ipa, u64 nr_pages, u64 *size)
> +{
> + if (nr_pages == 1) {
> + *size = PAGE_SIZE;
> + return 0;
> + }
> +
> + /* We solely support PMD_SIZE huge-pages */
> + if (nr_pages != (1 << (PMD_SHIFT - PAGE_SHIFT)))
> + return -EINVAL;
I'm not really keen on the whole PxD nomenclature. What we really care
about is a mapping level (level 2 in this instance). Can we instead
use kvm_granule_size()? Something like:

	if ((nr_pages * PAGE_SIZE) != kvm_granule_size(2))
		return -EINVAL;
> +
> + if (!IS_ALIGNED(phys | ipa, PMD_SIZE))
> + return -EINVAL;
> +
> + *size = PMD_SIZE;
Similar things here. But also, should this level-2 block checking be
moved to the patch that actually allows block mapping?
Thanks,
M.
--
Without deviation from the norm, progress is not possible.