Message-ID: <4a3809e8-61b2-4341-a868-292ba6e64e8a@sirena.org.uk>
Date: Wed, 21 Feb 2024 17:36:12 +0000
From: Mark Brown <broonie@...nel.org>
To: "dalias@...c.org" <dalias@...c.org>
Cc: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"suzuki.poulose@....com" <suzuki.poulose@....com>,
"Szabolcs.Nagy@....com" <Szabolcs.Nagy@....com>,
"musl@...ts.openwall.com" <musl@...ts.openwall.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"kvmarm@...ts.linux.dev" <kvmarm@...ts.linux.dev>,
"corbet@....net" <corbet@....net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>,
"palmer@...belt.com" <palmer@...belt.com>,
"debug@...osinc.com" <debug@...osinc.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
"shuah@...nel.org" <shuah@...nel.org>,
"arnd@...db.de" <arnd@...db.de>, "maz@...nel.org" <maz@...nel.org>,
"oleg@...hat.com" <oleg@...hat.com>,
"fweimer@...hat.com" <fweimer@...hat.com>,
"keescook@...omium.org" <keescook@...omium.org>,
"james.morse@....com" <james.morse@....com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"will@...nel.org" <will@...nel.org>,
"brauner@...nel.org" <brauner@...nel.org>,
"hjl.tools@...il.com" <hjl.tools@...il.com>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
"ardb@...nel.org" <ardb@...nel.org>,
"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"thiago.bauermann@...aro.org" <thiago.bauermann@...aro.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"sorear@...tmail.com" <sorear@...tmail.com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [musl] Re: [PATCH v8 00/38] arm64/gcs: Provide support for GCS
in userspace
On Wed, Feb 21, 2024 at 09:58:01AM -0500, dalias@...c.org wrote:
> On Wed, Feb 21, 2024 at 01:53:10PM +0000, Mark Brown wrote:
> > On Tue, Feb 20, 2024 at 08:27:37PM -0500, dalias@...c.org wrote:
> > > On Wed, Feb 21, 2024 at 12:35:48AM +0000, Edgecombe, Rick P wrote:
> > > > (INCSSP, RSTORSSP, etc). These are a collection of instructions that
> > > > allow limited control of the SSP. When shadow stack gets disabled,
> > > > these suddenly turn into #UD generating instructions. So any other
> > > > threads executing those instructions when shadow stack got disabled
> > > > would be in for a nasty surprise.
> > > This is the kernel's problem if that's happening. It should be
> > > trapping these and returning immediately like a NOP if shadow stack
> > > has been disabled, not generating SIGILL.
> > I'm not sure that's going to work out well; all it takes is some
> > code that's looking at the shadow stack and expecting something to
> > happen as a result of the instructions it's executing and we run
> > into trouble.
> I said NOP but there's no reason it strictly needs to be a NOP. It
> could instead do something reasonable to convey the state of racing
> with shadow stack being disabled.
This feels like it's getting complicated, and I fear it may be an
uphill struggle to get such code merged, at least for arm64.  My
instinct is that it's going to be much more robust and generally
tractable to let things run to some suitable synchronisation point and
then disable there.  If we're going to do that then userspace can
hopefully arrange to do the disabling itself through the standard
disable interface; presumably it'll want to notice things being
disabled at some point anyway?  TBH that's how all the prior proposals
I've seen for process-wide disable were done.
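
For illustration, a rough sketch of what that userspace-driven disable
could look like with the prctl() interface this series adds; the
constant names and the per-thread semantics are assumed from the
patches rather than from a released kernel:

/*
 * Rough sketch only: a thread disabling its own shadow stack via the
 * prctl() interface proposed in this series.  Constant names and the
 * read-modify-write flow are assumptions based on the patches.
 */
#include <sys/prctl.h>
#include <linux/prctl.h>

static int disable_own_gcs(void)
{
        unsigned long status;

        /* Fetch the current shadow stack status for this thread */
        if (prctl(PR_GET_SHADOW_STACK_STATUS, &status, 0, 0, 0))
                return -1;

        /*
         * Clear the enable bit.  This has to happen at a point where
         * nothing we still need is live on the GCS, ie. the sort of
         * synchronisation point discussed above.
         */
        status &= ~PR_SHADOW_STACK_ENABLE;

        return prctl(PR_SET_SHADOW_STACK_STATUS, status, 0, 0, 0);
}

As proposed this only affects the calling thread, so a process wide
disable would still need every thread to reach such a point itself,
which is rather the sticking point in this discussion.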