Message-ID: <CAG48ez0OjQpCvO1EqUqtHX+zVj27p3yWd5riY_r7+rNWwec_OQ@mail.gmail.com>
Date: Mon, 4 May 2020 20:03:25 +0200
From: Jann Horn <jannh@...gle.com>
To: Will Deacon <will@...nel.org>
Cc: Sami Tolvanen <samitolvanen@...gle.com>,
Kees Cook <keescook@...omium.org>,
Catalin Marinas <catalin.marinas@....com>,
James Morse <james.morse@....com>,
Steven Rostedt <rostedt@...dmis.org>,
Ard Biesheuvel <ard.biesheuvel@...aro.org>,
Mark Rutland <mark.rutland@....com>,
Masahiro Yamada <masahiroy@...nel.org>,
Michal Marek <michal.lkml@...kovi.net>,
Ingo Molnar <mingo@...hat.com>,
Peter Zijlstra <peterz@...radead.org>,
Juri Lelli <juri.lelli@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dave Martin <Dave.Martin@....com>,
Laura Abbott <labbott@...hat.com>,
Marc Zyngier <maz@...nel.org>,
Masami Hiramatsu <mhiramat@...nel.org>,
Nick Desaulniers <ndesaulniers@...gle.com>,
Miguel Ojeda <miguel.ojeda.sandonis@...il.com>,
clang-built-linux <clang-built-linux@...glegroups.com>,
Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Linux ARM <linux-arm-kernel@...ts.infradead.org>,
kernel list <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v11 01/12] add support for Clang's Shadow Call Stack (SCS)
On Mon, May 4, 2020 at 6:52 PM Will Deacon <will@...nel.org> wrote:
> On Mon, Apr 27, 2020 at 01:45:46PM -0700, Sami Tolvanen wrote:
> > On Fri, Apr 24, 2020 at 12:21:14PM +0100, Will Deacon wrote:
> > > Also, since you mentioned the lack of redzoning, isn't it a bit dodgy
> > > allocating blindly out of the kmem_cache? It means we don't have a redzone
> > > or a guard page, so if you can trigger something like a recursion bug then
> > > could you scribble past the SCS before the main stack overflows? Would this
> > > clobber somebody else's SCS?
> >
> > I agree that allocating from a kmem_cache isn't ideal for safety. It's a
> > compromise to reduce memory overhead.
>
> Do you think it would be a problem if we always allocated a page for the
> SCS?
I guess doing this safely, without wasting a page per task, would only
be possible in an elegant way once MTE lands on devices?
I wonder how bad context switch latency would be if the actual SCS
were percpu and vmapped (starting at an offset inside the page such
that the SCS can only grow up to something like 0x400 bytes before
panicking the CPU), and the context switch path saved/restored the
used part of the vmapped SCS to/from a smaller allocation from the
slab allocator. Presumably the SCS will usually only be something
like one cacheline big, so that should only cost a moderate amount of
time to copy...
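(Very rough sketch of what I mean below - scs_vmap_base and the
scs_backing/scs_used/scs_sp task_struct members are all made-up names
for illustration, not anything from the actual series:)

#include <linux/percpu.h>
#include <linux/sched.h>
#include <linux/string.h>

/* per-CPU vmapped SCS that the task currently on the CPU really uses */
static DEFINE_PER_CPU(unsigned long *, scs_vmap_base);

static void scs_switch(struct task_struct *prev, struct task_struct *next)
{
        unsigned long *vbase = this_cpu_read(scs_vmap_base);
        size_t used = prev->scs_sp - (unsigned long)vbase;

        /* spill the used part of the vmapped SCS into prev's slab buffer */
        memcpy(prev->scs_backing, vbase, used);
        prev->scs_used = used;

        /* and pull next's saved contents back into the vmapped SCS */
        memcpy(vbase, next->scs_backing, next->scs_used);
        next->scs_sp = (unsigned long)vbase + next->scs_used;
}

So the vmapped page would be what the SCS register actually points at
while a task runs, and the slab allocation would just be cold storage
for tasks that are scheduled out.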
Or, as an extension of that, if the SCS copying turns out to be too
costly: there could be a percpu LRU cache of vmapped SCS pages, and
whenever a task that doesn't have a vmapped SCS gets scheduled, it
"swaps out" the contents of the least recently used vmapped SCS into
the corresponding task's slab SCS, and "swaps in" from its own slab
SCS into the vmapped SCS. Task migration would force "swapping out".
Not sure if this is a good idea, or if I'm just making things worse by
suggesting extra complexity...