Message-ID: <cf3v4dn233bf6y74ythiqulwfnshcdmddsdx3iqcenqjos5cct@zkcw7g5ieei7>
Date: Fri, 19 Sep 2025 09:52:20 +0000
From: Quentin Perret <qperret@...gle.com>
To: Vincent Donnefort <vdonnefort@...gle.com>
Cc: maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
suzuki.poulose@....com, yuzenghui@...wei.com, catalin.marinas@....com, will@...nel.org,
sebastianene@...gle.com, keirf@...gle.com, linux-arm-kernel@...ts.infradead.org,
kvmarm@...ts.linux.dev, linux-kernel@...r.kernel.org, kernel-team@...roid.com
Subject: Re: [PATCH] KVM: arm64: Validate input range for pKVM mem transitions
On Thursday 18 Sep 2025 at 19:00:49 (+0100), Vincent Donnefort wrote:
> There's currently no verification of host-issued ranges in most of the
> pKVM memory transitions. The computed end boundary might therefore
> overflow and evade the later checks.
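
To make the failure mode concrete for the archives, with made-up numbers
(purely illustrative, nothing from the patch itself):

	#include <stdint.h>
	#include <stdio.h>

	#define PAGE_SIZE 4096ULL	/* stand-in for the kernel macro */

	int main(void)
	{
		/* hypothetical host-controlled inputs, chosen to wrap */
		uint64_t phys = UINT64_MAX - PAGE_SIZE + 1;
		uint64_t size = 2 * PAGE_SIZE;
		uint64_t end  = phys + size;	/* wraps around to PAGE_SIZE */

		/*
		 * end < phys, so a later check on end - 1 (as in
		 * range_is_memory()) sees a small, plausible address and
		 * can pass even though the request spans the top of the
		 * address space.
		 */
		printf("phys=%#llx end=%#llx\n",
		       (unsigned long long)phys, (unsigned long long)end);
		return 0;
	}
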
>
> Close this loophole with an additional range_is_valid() check on a
> per-public-function basis.
>
> The host_unshare_guest transition is already protected via
> __check_host_shared_guest(), while assert_host_shared_guest() callers
> already skip the host checks.
>
> Signed-off-by: Vincent Donnefort <vdonnefort@...gle.com>
>
> diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> index 8957734d6183..b156fb0bad0f 100644
> --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> @@ -443,6 +443,11 @@ static bool range_is_memory(u64 start, u64 end)
> return is_in_mem_range(end - 1, &r);
> }
>
> +static bool range_is_valid(u64 start, u64 end)
> +{
> + return start < end;
> +}
> +
> static inline int __host_stage2_idmap(u64 start, u64 end,
> enum kvm_pgtable_prot prot)
> {
> @@ -776,6 +781,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
> void *virt = __hyp_va(phys);
> int ret;
>
> + if (!range_is_valid(phys, phys + size))
> + return -EINVAL;
> +
> host_lock_component();
> hyp_lock_component();
>
> @@ -804,6 +812,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
> u64 virt = (u64)__hyp_va(phys);
> int ret;
>
> + if (!range_is_valid(phys, phys + size))
> + return -EINVAL;
> +
> host_lock_component();
> hyp_lock_component();
>
> @@ -887,6 +898,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
> u64 size = PAGE_SIZE * nr_pages;
It occurred to me that this can also overflow, so perhaps fold that
calculation into your helper as well, to be on the safe side?
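
Something like the below, maybe (completely untested, just to show what I
mean; the name and signature are of course up to you, and this assumes the
check_*_overflow() helpers from <linux/overflow.h> are usable at EL2):

	/*
	 * Sketch: derive the end boundary inside the helper so that the
	 * PAGE_SIZE * nr_pages multiplication is checked as well.
	 */
	static bool range_is_valid(u64 phys, u64 nr_pages, u64 *end)
	{
		u64 size;

		if (!nr_pages)
			return false;

		if (check_mul_overflow(nr_pages, (u64)PAGE_SIZE, &size))
			return false;

		if (check_add_overflow(phys, size, end))
			return false;

		return true;
	}

and callers would then do something along the lines of:

	if (!range_is_valid(phys, nr_pages, &end))
		return -EINVAL;
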
Thanks,
Quentin
> int ret;
>
> + if (!range_is_valid(phys, phys + size))
> + return -EINVAL;
> +
> host_lock_component();
> ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
> if (!ret)
> @@ -902,6 +916,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> u64 size = PAGE_SIZE * nr_pages;
> int ret;
>
> + if (!range_is_valid(phys, phys + size))
> + return -EINVAL;
> +
> host_lock_component();
> ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
> if (!ret)
> @@ -949,6 +966,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
> if (ret)
> return ret;
>
> + if (!range_is_valid(phys, phys + size))
> + return -EINVAL;
> +
> ret = check_range_allowed_memory(phys, phys + size);
> if (ret)
> return ret;
>
> base-commit: 8b789f2b7602a818e7c7488c74414fae21392b63
> --
> 2.51.0.470.ga7dc726c21-goog
>