Message-ID: <aM0rFlaVKRkNxQPS@google.com>
Date: Fri, 19 Sep 2025 11:06:14 +0100
From: Vincent Donnefort <vdonnefort@...gle.com>
To: Quentin Perret <qperret@...gle.com>
Cc: maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
	suzuki.poulose@....com, yuzenghui@...wei.com,
	catalin.marinas@....com, will@...nel.org, sebastianene@...gle.com,
	keirf@...gle.com, linux-arm-kernel@...ts.infradead.org,
	kvmarm@...ts.linux.dev, linux-kernel@...r.kernel.org,
	kernel-team@...roid.com
Subject: Re: [PATCH] KVM: arm64: Validate input range for pKVM mem transitions

On Fri, Sep 19, 2025 at 09:52:20AM +0000, Quentin Perret wrote:
> On Thursday 18 Sep 2025 at 19:00:49 (+0100), Vincent Donnefort wrote:
> > There's currently no verification of host-issued ranges in most of the
> > pKVM memory transitions. The computed end boundary might therefore
> > overflow and evade the later checks.
> > 
> > Close this loophole with an additional range_is_valid() check on a
> > per-public-function basis.
> > 
> > The host_unshare_guest transition is already protected via
> > __check_host_shared_guest(), while assert_host_shared_guest() callers
> > already ignore the host checks.
> > 
> > Signed-off-by: Vincent Donnefort <vdonnefort@...gle.com>
> > 
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index 8957734d6183..b156fb0bad0f 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -443,6 +443,11 @@ static bool range_is_memory(u64 start, u64 end)
> >  	return is_in_mem_range(end - 1, &r);
> >  }
> >  
> > +static bool range_is_valid(u64 start, u64 end)
> > +{
> > +	return start < end;
> > +}
> > +
> >  static inline int __host_stage2_idmap(u64 start, u64 end,
> >  				      enum kvm_pgtable_prot prot)
> >  {
> > @@ -776,6 +781,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
> >  	void *virt = __hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!range_is_valid(phys, phys + size))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -804,6 +812,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
> >  	u64 virt = (u64)__hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!range_is_valid(phys, phys + size))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -887,6 +898,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> 
> It occurred to me that this can also overflow, so perhaps fold that
> calculation into your helper as well, to be on the safe side?

I believe this is currently fine everywhere, as nr_pages is only ever used
for the size computation. But I'm happy to pass nr_pages to range_is_valid()
(instead of end) so that the size computation is verified as well. That will
certainly be more future-proof.
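
Something along those lines, roughly. This is only an untested sketch of the
idea (the overflow check via U64_MAX / PAGE_SIZE is just one way to write it,
and the respin may end up looking different):

static bool range_is_valid(u64 phys, u64 nr_pages)
{
	u64 size;

	/* Reject empty ranges and a size computation that would wrap. */
	if (!nr_pages || nr_pages > U64_MAX / PAGE_SIZE)
		return false;
	size = nr_pages * PAGE_SIZE;

	/* Reject ranges whose end address would wrap around. */
	return phys + size > phys;
}

Callers would then pass nr_pages directly instead of precomputing phys + size.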

Let me respin that.

> 
> Thanks,
> Quentin
> 
> >  	int ret;
> >  
> > +	if (!range_is_valid(phys, phys + size))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
> >  	if (!ret)
> > @@ -902,6 +916,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> >  	int ret;
> >  
> > +	if (!range_is_valid(phys, phys + size))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
> >  	if (!ret)
> > @@ -949,6 +966,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
> >  	if (ret)
> >  		return ret;
> >  
> > +	if (!range_is_valid(phys, phys + size))
> > +		return -EINVAL;
> > +
> >  	ret = check_range_allowed_memory(phys, phys + size);
> >  	if (ret)
> >  		return ret;
> > 
> > base-commit: 8b789f2b7602a818e7c7488c74414fae21392b63
> > -- 
> > 2.51.0.470.ga7dc726c21-goog
> > 
