Message-ID: <aQOKLaCKUDcqIZeM@google.com>
Date: Thu, 30 Oct 2025 15:54:21 +0000
From: Vincent Donnefort <vdonnefort@...gle.com>
To: Sebastian Ene <sebastianene@...gle.com>
Cc: maz@...nel.org, oliver.upton@...ux.dev, joey.gouly@....com,
	suzuki.poulose@....com, yuzenghui@...wei.com,
	catalin.marinas@....com, will@...nel.org, qperret@...gle.com,
	keirf@...gle.com, linux-arm-kernel@...ts.infradead.org,
	kvmarm@...ts.linux.dev, linux-kernel@...r.kernel.org,
	kernel-team@...roid.com
Subject: Re: [PATCH v3] KVM: arm64: Check range args for pKVM mem transitions

On Thu, Oct 30, 2025 at 06:09:31AM +0000, Sebastian Ene wrote:
> On Thu, Oct 16, 2025 at 05:45:41PM +0100, Vincent Donnefort wrote:
> > There's currently no verification for host issued ranges in most of the
> > pKVM memory transitions. The end boundary might therefore be subject to
> > overflow and later checks could be evaded.
> > 
> > Close this loophole with an additional pfn_range_is_valid() check on a
> > per public function basis. Once this check has passed, it is safe to
> > convert pfn and nr_pages into a phys_addr_t and a size.
> > 
> > host_unshare_guest transition is already protected via
> > __check_host_shared_guest(), while assert_host_shared_guest() callers
> > are already ignoring host checks.
> > 
> > Signed-off-by: Vincent Donnefort <vdonnefort@...gle.com>
> > 
> > ---
> > 
> > v2 -> v3: 
> >    * Test range against PA-range and make the func phys specific.
> > 
> > v1 -> v2:
> >    * Also check for (nr_pages * PAGE_SIZE) overflow. (Quentin)
> >    * Rename to check_range_args().
> > 
> > diff --git a/arch/arm64/kvm/hyp/nvhe/mem_protect.c b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > index ddc8beb55eee..49db32f3ddf7 100644
> > --- a/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > +++ b/arch/arm64/kvm/hyp/nvhe/mem_protect.c
> > @@ -367,6 +367,19 @@ static int host_stage2_unmap_dev_all(void)
> >  	return kvm_pgtable_stage2_unmap(pgt, addr, BIT(pgt->ia_bits) - addr);
> >  }
> 
> Hello Vincent,
> 
> >  
> > +/*
> > + * Ensure the PFN range is contained within PA-range.
> > + *
> > + * This check is also robust to overflows and is therefore a requirement before
> > + * using a pfn/nr_pages pair from an untrusted source.
> > + */
> > +static bool pfn_range_is_valid(u64 pfn, u64 nr_pages)
> > +{
> > +	u64 limit = BIT(kvm_phys_shift(&host_mmu.arch.mmu) - PAGE_SHIFT);
> > +
> > +	return pfn < limit && ((limit - pfn) >= nr_pages);
> > +}
> > +
> 
> This newly introduced function is probably fine to call without the host lock held,
> as long as no one modifies the vtcr field in the host.mmu structure. While
> searching, I couldn't find a place where it is directly modified, so
> this should be fine.
> 
> >  struct kvm_mem_range {
> >  	u64 start;
> >  	u64 end;
> > @@ -776,6 +789,9 @@ int __pkvm_host_donate_hyp(u64 pfn, u64 nr_pages)
> >  	void *virt = __hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -804,6 +820,9 @@ int __pkvm_hyp_donate_host(u64 pfn, u64 nr_pages)
> >  	u64 virt = (u64)__hyp_va(phys);
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	hyp_lock_component();
> >  
> > @@ -887,6 +906,9 @@ int __pkvm_host_share_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_OWNED);
> >  	if (!ret)
> > @@ -902,6 +924,9 @@ int __pkvm_host_unshare_ffa(u64 pfn, u64 nr_pages)
> >  	u64 size = PAGE_SIZE * nr_pages;
> >  	int ret;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> >  	host_lock_component();
> >  	ret = __host_check_page_state_range(phys, size, PKVM_PAGE_SHARED_OWNED);
> >  	if (!ret)
> > @@ -945,6 +970,9 @@ int __pkvm_host_share_guest(u64 pfn, u64 gfn, u64 nr_pages, struct pkvm_hyp_vcpu
> >  	if (prot & ~KVM_PGTABLE_PROT_RWX)
> >  		return -EINVAL;
> >  
> > +	if (!pfn_range_is_valid(pfn, nr_pages))
> > +		return -EINVAL;
> > +
> 
> I don't think we need it here, because __pkvm_host_share_guest already has
> the __guest_check_transition_size verification in place, which limits
> nr_pages.

__guest_check_transition_size will only limit the range to PMD_SIZE, which can be
quite a big number on systems with pages larger than 4KiB. So I believe this is
still a loophole worth fixing.

> 
> >  	ret = __guest_check_transition_size(phys, ipa, nr_pages, &size);
> >  	if (ret)
> >  		return ret;
> > 
> > base-commit: 7ea30958b3054f5e488fa0b33c352723f7ab3a2a
> > -- 
> > 2.51.0.869.ge66316f041-goog
> >
> 
> Other than that this looks good, thanks
> Sebastian

Thanks for having a look at the patch.
