Message-ID: <aL8RdvuDbtbUDk2D@arm.com>
Date: Mon, 8 Sep 2025 18:25:10 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Steven Price <steven.price@....com>
Cc: Jason Gunthorpe <jgg@...pe.ca>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-coco@...ts.linux.dev, will@...nel.org, maz@...nel.org,
	tglx@...utronix.de, robin.murphy@....com, akpm@...ux-foundation.org
Subject: Re: [RFC PATCH] arm64: swiotlb: dma: its: Ensure shared buffers are
 properly aligned

On Mon, Sep 08, 2025 at 04:39:13PM +0100, Steven Price wrote:
> On 08/09/2025 15:58, Jason Gunthorpe wrote:
> > If ARM has proper faulting then you don't have an issue mapping 64K
> > into a userspace and just segfaulting the VMM if it does something
> > wrong.
> 
> ...the VMM can cause problems. If the VMM touches the memory itself then
> things are simple - we can detect that the fault was from user space and
> trigger a SIGBUS to kill off the VMM.

Similarly for uaccess.

> But the VMM can also attempt to pass the address into the kernel and
> cause the kernel to do a get_user_pages() call (and this is something we
> want to support for shared memory). The problem is if the kernel then
> touches the parts of the page which are protected we get a fault with no
> (easy) way to relate back to the VMM.

I assume the host has a mechanism to check that the memory has been
marked as shared by the guest and the guest cannot claim it back as
private while the host is accessing it (I should dig out the CCA spec).

> guest_memfd provided a nice way around this - a dedicated allocator
> which doesn't allow mmap(). This meant we didn't need to worry about user
> space handing protected memory into the kernel. It's now getting
> extended to support mmap() but only when shared, and there was a lot of
> discussion about how to ensure that there are no mmap regions when
> converting memory back to private.

That's indeed problematic and we don't have a simple way to check that
a user VMM address won't fault when accessed via the linear map. The
vma checks we get with mmap are based on the (host) page size.

Can we instead only allow mismatched (or smaller) granule sizes in the
guest if the VMM doesn't use the mmap() interface? Not by trapping
TCR_EL1 but simply by rejecting such unaligned memory slots, since the
host will need to check that the memory has indeed been shared anyway.
KVM could advertise only larger granules, though the guest is free to
ignore them.

-- 
Catalin
