Open Source and information security mailing list archives
Message-ID: <aL7AoPKKKAR8285O@arm.com>
Date: Mon, 8 Sep 2025 12:40:16 +0100
From: Catalin Marinas <catalin.marinas@....com>
To: "Aneesh Kumar K.V" <aneesh.kumar@...nel.org>
Cc: linux-kernel@...r.kernel.org, iommu@...ts.linux.dev,
	linux-coco@...ts.linux.dev, will@...nel.org, maz@...nel.org,
	tglx@...utronix.de, robin.murphy@....com, suzuki.poulose@....com,
	akpm@...ux-foundation.org, jgg@...pe.ca, steven.price@....com
Subject: Re: [RFC PATCH] arm64: swiotlb: dma: its: Ensure shared buffers are
 properly aligned

On Mon, Sep 08, 2025 at 03:07:00PM +0530, Aneesh Kumar K.V wrote:
> Catalin Marinas <catalin.marinas@....com> writes:
> > On Fri, Sep 05, 2025 at 11:24:41AM +0530, Aneesh Kumar K.V (Arm) wrote:
> >> When running with private memory guests, the guest kernel must allocate
> >> memory with specific constraints when sharing it with the hypervisor.
> >> 
> >> These shared memory buffers are also accessed by the host kernel, which
> >> means they must be aligned to the host kernel's page size.
> >
> > So this is the case where the guest page size is smaller than the host
> > one. Just trying to understand what would go wrong if we don't do
> > anything here. Let's say the guest uses 4K pages and the host 64K
> > pages. Within a 64K range, only a single 4K chunk is shared/decrypted. If the host
> > does not explicitly access the other 60K around the shared 4K, can
> > anything still go wrong? Is the hardware ok with speculative loads from
> > non-shared ranges?
> 
> With features like guest_memfd, the goal is to explicitly prevent the
> host from mapping private memory, rather than relying on the host to
> avoid accessing those regions.

Yes, if all the memory is private. At some point the guest will start
sharing memory with the host. In theory, the host could map more than it
was given access to as long as it doesn't touch the area around the
shared range. Not ideal and it may not match the current guest_memfd API
but I'd like to understand all the options we have.

> As per Arm ARM:
> RVJLXG: Accesses are checked against the GPC configuration for the
> physical granule being accessed, regardless of the stage 1 and stage 2
> translation configuration.

OK, so this rule doesn't say anything about the granule size at stage 1
or stage 2. The check is purely done based on the PGS field
configuration. The need for the host granule size to match PGS is just a
software construct.

> For example, if GPCCR_EL3.PGS is configured to a smaller granule size
> than the configured stage 1 and stage 2 translation granule size,
> accesses are checked at the GPCCR_EL3.PGS granule size.

I assume GPCCR_EL3.PGS is pre-configured on the system as 4K and part of
the RMM spec.

-- 
Catalin
