Message-ID: <20240221191039.GV4163@brightrain.aerifal.cx>
Date: Wed, 21 Feb 2024 14:10:39 -0500
From: "dalias@...c.org" <dalias@...c.org>
To: Mark Brown <broonie@...nel.org>
Cc: "Edgecombe, Rick P" <rick.p.edgecombe@...el.com>,
"linux-arch@...r.kernel.org" <linux-arch@...r.kernel.org>,
"suzuki.poulose@....com" <suzuki.poulose@....com>,
"Szabolcs.Nagy@....com" <Szabolcs.Nagy@....com>,
"musl@...ts.openwall.com" <musl@...ts.openwall.com>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-riscv@...ts.infradead.org" <linux-riscv@...ts.infradead.org>,
"kvmarm@...ts.linux.dev" <kvmarm@...ts.linux.dev>,
"corbet@....net" <corbet@....net>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
"catalin.marinas@....com" <catalin.marinas@....com>,
"oliver.upton@...ux.dev" <oliver.upton@...ux.dev>,
"palmer@...belt.com" <palmer@...belt.com>,
"debug@...osinc.com" <debug@...osinc.com>,
"aou@...s.berkeley.edu" <aou@...s.berkeley.edu>,
"shuah@...nel.org" <shuah@...nel.org>,
"arnd@...db.de" <arnd@...db.de>, "maz@...nel.org" <maz@...nel.org>,
"oleg@...hat.com" <oleg@...hat.com>,
"fweimer@...hat.com" <fweimer@...hat.com>,
"keescook@...omium.org" <keescook@...omium.org>,
"james.morse@....com" <james.morse@....com>,
"ebiederm@...ssion.com" <ebiederm@...ssion.com>,
"will@...nel.org" <will@...nel.org>,
"brauner@...nel.org" <brauner@...nel.org>,
"hjl.tools@...il.com" <hjl.tools@...il.com>,
"linux-kselftest@...r.kernel.org" <linux-kselftest@...r.kernel.org>,
"paul.walmsley@...ive.com" <paul.walmsley@...ive.com>,
"ardb@...nel.org" <ardb@...nel.org>,
"linux-arm-kernel@...ts.infradead.org" <linux-arm-kernel@...ts.infradead.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"thiago.bauermann@...aro.org" <thiago.bauermann@...aro.org>,
"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
"sorear@...tmail.com" <sorear@...tmail.com>,
"linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: Re: [musl] Re: [PATCH v8 00/38] arm64/gcs: Provide support for GCS
in userspace
On Wed, Feb 21, 2024 at 06:32:20PM +0000, Mark Brown wrote:
> On Wed, Feb 21, 2024 at 12:57:19PM -0500, dalias@...c.org wrote:
> > On Wed, Feb 21, 2024 at 05:36:12PM +0000, Mark Brown wrote:
>
> > > This feels like it's getting complicated and I fear it may be an uphill
> > > struggle to get such code merged, at least for arm64. My instinct is
> > > that it's going to be much more robust and generally tractable to let
> > > things run to some suitable synchronisation point and then disable
> > > there, but if we're going to do that then userspace can hopefully
> > > arrange to do the disabling itself through the standard disable
> > > interface anyway. Presumably it'll want to notice things being disabled
> > > at some point anyway? TBH that's been how all the prior proposals for
> > > process wide disable I've seen were done.
>
> > If it's possible to disable per-thread rather than per-process, some
> > things are easier. Disabling on account of using alt stacks only needs
>
> Both x86 and arm64 currently track shadow stack enablement per thread,
> not per process, so it's not just possible to do per thread it's the
> only thing we're currently implementing. I think the same is true for
> RISC-V but I didn't look as closely at that yet.
That's nice! It lets us keep part of the benefit of SS in programs
where some threads run on custom stacks. We do however need a
global-disable option for dlopen. In musl this could be done via the
same mechanism ("synccall") used for set*id -- it's basically a
userspace IPI. But just having a native operation would be nicer, and
would probably help glibc, where I don't think the set*id mechanism
has been abstracted to do other things like this.
> > If folks on the kernel side are not going to be amenable to doing the
> > things that are easy for the kernel to make it work without breaking
> > compatibility with existing interfaces, but that are impossible or
> > near-impossible for userspace to do, this seems like a dead-end. And I
> > suspect an operation to "disable shadow stack, but without making
> > threads still in SS-critical sections crash" is going to be
> > necessary..
>
> Could you be more specific as to the easy things that you're referencing
> here?
Basically the ARCH_SHSTK_SUPPRESS_UD proposal.
Rich