Message-ID:
 <SN6PR02MB41571B5C2C9C59B0DF5F4E7ED4FB2@SN6PR02MB4157.namprd02.prod.outlook.com>
Date: Fri, 7 Jun 2024 16:36:18 +0000
From: Michael Kelley <mhklinux@...look.com>
To: Catalin Marinas <catalin.marinas@....com>
CC: Steven Price <steven.price@....com>, "kvm@...r.kernel.org"
	<kvm@...r.kernel.org>, "kvmarm@...ts.linux.dev" <kvmarm@...ts.linux.dev>,
	Marc Zyngier <maz@...nel.org>, Will Deacon <will@...nel.org>, James Morse
	<james.morse@....com>, Oliver Upton <oliver.upton@...ux.dev>, Suzuki K
 Poulose <suzuki.poulose@....com>, Zenghui Yu <yuzenghui@...wei.com>,
	"linux-arm-kernel@...ts.infradead.org"
	<linux-arm-kernel@...ts.infradead.org>, "linux-kernel@...r.kernel.org"
	<linux-kernel@...r.kernel.org>, Joey Gouly <joey.gouly@....com>, Alexandru
 Elisei <alexandru.elisei@....com>, Christoffer Dall
	<christoffer.dall@....com>, Fuad Tabba <tabba@...gle.com>,
	"linux-coco@...ts.linux.dev" <linux-coco@...ts.linux.dev>, Ganapatrao
 Kulkarni <gankulkarni@...amperecomputing.com>
Subject: RE: [PATCH v3 00/14] arm64: Support for running as a guest in Arm CCA

From: Catalin Marinas <catalin.marinas@....com> Sent: Friday, June 7, 2024 8:13 AM
> 
> On Fri, Jun 07, 2024 at 01:38:15AM +0000, Michael Kelley wrote:
> > From: Steven Price <steven.price@....com> Sent: Wednesday, June 5, 2024 2:30 AM
> > > This series adds support for running Linux in a protected VM under the
> > > Arm Confidential Compute Architecture (CCA). This has been updated
> > > following the feedback from the v2 posting[1]. Thanks for the feedback!
> > > Individual patches have a change log for v3.
> > >
> > > The biggest change from v2 is fixing set_memory_{en,de}crypted() to
> > > perform a break-before-make sequence. Note that only the virtual address
> > > supplied is flipped between shared and protected, so if e.g. a vmalloc()
> > > address is passed the linear map will still point to the (now invalid)
> > > previous IPA. Attempts to access the wrong address may trigger a
> > > Synchronous External Abort. However any code which attempts to access
> > > the 'encrypted' alias after set_memory_decrypted() is already likely to
> > > be broken on platforms that implement memory encryption, so I don't
> > > expect problems.
> >
> > In the case of a vmalloc() address, load_unaligned_zeropad() could still
> > make an access to the underlying pages through the linear address. In
> > CoCo guests on x86, both the vmalloc PTE and the linear map PTE are
> > flipped, so the load_unaligned_zeropad() problem can occur only during
> > the transition between decrypted and encrypted. But even then, the
> > exception handlers have code to fixup this case and allow everything to
> > proceed normally.
> >
> > I haven't looked at the code in your patches, but do you handle that case,
> > or somehow prevent it?
> 
> If we can guarantee that only a full vm_struct area is changed at a
> time, the vmap guard page would prevent this issue (not sure we can
> though). Otherwise I think we either change the set_memory_*() code to
> deal with the other mappings or we handle the exception.

I don't think the vmap guard pages help. The vmalloc() memory consists
of individual pages that are scattered throughout the direct map. The stray
reference from load_unaligned_zeropad() will originate in a kmalloc'ed
memory page that precedes one of these scattered individual pages, and
will use a direct map kernel vaddr. So the guard pages in vmalloc space don't
come into play. At least in the Hyper-V use case, an entire vmalloc allocation
*is* flipped as a unit, so the guard pages do prevent a stray reference from
load_unaligned_zeropad() that originates in vmalloc space. At one
point I looked to see if load_unaligned_zeropad() is ever used on vmalloc
addresses. I think the answer was "no", making the guard page question
moot, but I'm not sure. :-(
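
(To make that concrete, here is a rough sketch of the scenario and of
what load_unaligned_zeropad() conceptually does; the layout and the
byte_is_accessible() helper are purely illustrative, not actual kernel
code.)

/*
 * Illustrative only.  Assume a little-endian layout for simplicity.
 *
 *   direct map:  | ... kmalloc'ed page ... | page backing a vmalloc area |
 *                                     ^ object ends a few bytes before
 *                                       the page boundary
 *
 * A word-at-a-time read near the end of the kmalloc'ed object spills
 * onto the first bytes of the next direct-map page.  Conceptually:
 */
unsigned long load_unaligned_zeropad_concept(const unsigned char *addr)
{
        unsigned long ret = 0;
        size_t i;

        for (i = 0; i < sizeof(ret); i++) {
                if (!byte_is_accessible(addr + i))  /* hypothetical helper */
                        break;  /* real code: a single faulting word load
                                 * plus an exception fixup that zero-fills */
                ret |= (unsigned long)addr[i] << (8 * i);
        }
        return ret;
}

If the page after the kmalloc'ed one was flipped to shared only through
its vmalloc alias, the direct-map PTE still refers to the old IPA, so the
access doesn't produce the clean fault the fixup expects -- on arm64 it
can take a Synchronous External Abort instead.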

Another thought: The use of load_unaligned_zeropad() is conditional on
CONFIG_DCACHE_WORD_ACCESS. There are #ifdef'ed alternate
implementations that don't use load_unaligned_zeropad() when that option
is not enabled. I looked at just disabling CONFIG_DCACHE_WORD_ACCESS in
CoCo VMs, but I don't know the
performance impact. I speculated that the benefits were more noticeable
in processors from a decade or more ago, and perhaps less so now, but
never did any measurements. There was also a snag in that x86-only
code has a usage of load_unaligned_zeropad() without an alternate
implementation, so I never went fully down that path. But arm64 would
probably "just work" if it were disabled.

> 
> We also have potential user mappings, do we need to do anything about
> them?

I'm unclear on the scenario here.  Would memory with a user mapping
ever be flipped between decrypted and encrypted while the user mapping
existed? I don't recall being concerned about user mappings, so maybe I
had ruled out that scenario. On x86, flipping between decrypted and
encrypted may effectively change the contents of the memory, so doing
a flip while mapped into user space seems problematic. But maybe I'm
missing something.
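
(The pattern I have in mind is the usual kernel-internal lifecycle,
roughly sketched below; this is a simplified illustration, not code from
any particular driver, and it deliberately never exposes the page to
user space while it is shared.)

#include <linux/gfp.h>
#include <linux/mm.h>
#include <linux/set_memory.h>

static void *alloc_shared_page(void)
{
        struct page *page = alloc_page(GFP_KERNEL | __GFP_ZERO);
        unsigned long addr;

        if (!page)
                return NULL;
        addr = (unsigned long)page_address(page);

        /* flip to shared/decrypted before handing it to the host; on
         * some platforms this effectively changes the contents, so
         * nothing may rely on them across the transition */
        if (set_memory_decrypted(addr, 1)) {
                /* state is uncertain on failure, so leak rather than
                 * return a possibly-shared page to the allocator */
                return NULL;
        }
        return (void *)addr;
}

static void free_shared_page(void *va)
{
        /* flip back to private/encrypted before freeing; again, leak
         * on failure rather than recycle a still-shared page */
        if (!set_memory_encrypted((unsigned long)va, 1))
                free_page((unsigned long)va);
}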

Michael
