Message-ID: <aMQ1YeoB2PsO2e17@arm.com>
Date: Fri, 12 Sep 2025 15:59:45 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: Steven Price <steven.price@....com>
Cc: Jason Gunthorpe <jgg@...pe.ca>,
	Suzuki K Poulose <suzuki.poulose@....com>,
	"Aneesh Kumar K.V" <aneesh.kumar@...nel.org>,
	linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-coco@...ts.linux.dev, will@...nel.org, maz@...nel.org,
	tglx@...utronix.de, robin.murphy@....com, akpm@...ux-foundation.org
Subject: Re: [RFC PATCH] arm64: swiotlb: dma: its: Ensure shared buffers are
 properly aligned

On Wed, Sep 10, 2025 at 11:08:19AM +0100, Steven Price wrote:
> On 08/09/2025 18:25, Catalin Marinas wrote:
> > On Mon, Sep 08, 2025 at 04:39:13PM +0100, Steven Price wrote:
> >> On 08/09/2025 15:58, Jason Gunthorpe wrote:
> >>> If ARM has proper faulting then you don't have an issue mapping 64K
> >>> into a userspace and just segfaulting the VMM if it does something
> >>> wrong.
> >>
> >> ...the VMM can cause problems. If the VMM touches the memory itself then
> >> things are simple - we can detect that the fault was from user space and
> >> trigger a SIGBUS to kill off the VMM.
> > 
> > Similarly for uaccess.
> > 
> >> But the VMM can also attempt to pass the address into the kernel and
> >> cause the kernel to do a get_user_pages() call (and this is something we
> >> want to support for shared memory). The problem is that if the kernel then
> >> touches the parts of the page which are protected, we get a fault with no
> >> (easy) way to relate it back to the VMM.
> > 
> > I assume the host has a mechanism to check that the memory has been
> > marked as shared by the guest and the guest cannot claim it back as
> > private while the host is accessing it (I should dig out the CCA spec).
> > 
> >> guest_memfd provided a nice way around this - a dedicated allocator
> >> which doesn't allow mmap(). This meant we didn't need to worry about user
> >> space handing protected memory into the kernel. It's now getting
> >> extended to support mmap() but only when shared, and there was a lot of
> >> discussion about how to ensure that there are no mmap regions when
> >> converting memory back to private.
> > 
> > That's indeed problematic and we don't have a simple way to check that
> > a user VMM address won't fault when accessed via the linear map. The
> > vma checks we get with mmap are (host) page size based.
> > 
> > Can we instead only allow mismatched (or smaller) granule sizes in the
> > guest if the VMM doesn't use the mmap() interface? That wouldn't mean
> > trapping TCR_EL1, just rejecting such unaligned memory slots, since
> > the host will need to check that the memory has indeed been shared. KVM
> > can advertise higher granules only, though the guest can ignore them.
> > 
> 
> Yes, mismatched granule sizes could be supported if we disallowed
> mmap(). This is assuming the RMM supports the required size - which is
> currently true, but the intention is to optimise the S2 in the future by
> matching the host page size.
> 
> But I'm not sure how useful that would be. The VMMs of today don't
> expect to have to perform read()/write() calls to access the guest's
> memory, so any user space emulation would also need to be updated to
> deal with this restriction.
> 
> But that seems like a lot of effort to support something that doesn't
> seem to have a use case, whereas there's an obvious use case for the
> guest and VMM sharing one (or often more) pages of (mapped) memory. The
> part where CCA makes this tricky is that we need to pick the VMM's page
> size rather than the guest's.

Given that the vmas in Linux are page-aligned, it's too intrusive to
support sub-page granularity in the host (if at all possible). So, based
on the discussion here, we do need the guest to play along and share
mappings with the granularity of the host page size. Of course, one way
is to mandate that the guest uses the same page size as the host.

The original patch needs some more changes, as mentioned in this thread. It
is missing places where we have set_memory_decrypted() but the size is
not guaranteed to be aligned. I would also replace the
arch_shared_mem_alignment() name with something that resembles the
mem-encrypt API (e.g. mem_encrypt_align(size) for lack of inspiration;
the default would return 'size' so there's no change for other
architectures). Using 'shared' is confusing since the notion of sharing
is not limited to confidential compute.
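
Something along these lines, as a rough and completely untested sketch
(the naming, the REALM_SHARE_GRANULE constant and the file placement are
all placeholders rather than an existing API; two separate hunks shown
together):

/*
 * Generic default, e.g. in include/linux/mem_encrypt.h. Returning the
 * size unchanged means no behavioural change for other architectures.
 */
#ifndef mem_encrypt_align
static inline size_t mem_encrypt_align(size_t size)
{
	return size;
}
#endif

/*
 * arm64 override, e.g. in arch/arm64/include/asm/mem_encrypt.h, picked up
 * before the generic fallback above. REALM_SHARE_GRANULE stands in for
 * whatever granule the RMM requires for shared buffers.
 */
#define mem_encrypt_align mem_encrypt_align
static inline size_t mem_encrypt_align(size_t size)
{
	if (is_realm_world())
		return ALIGN(size, REALM_SHARE_GRANULE);
	return size;
}

Callers that eventually hit set_memory_decrypted() would then round up
before allocating, something like:

	size = mem_encrypt_align(size);
	/* ... allocate 'size' bytes at vaddr ... */
	set_memory_decrypted((unsigned long)vaddr, size >> PAGE_SHIFT);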

It does feel like this could be handled at a higher level (e.g. the
virtio code or specific device drivers doing DMA) but that wouldn't be
generic enough. Bouncing decrypted DMA via swiotlb is already
generic.

BTW, with device assignment, we need a second, encrypted swiotlb as it's
used for bouncing small buffers. Unless we mandate that all devices
assigned to realms are fully coherent.

-- 
Catalin
