Message-ID: <1556289889.2833.17.camel@HansenPartnership.com>
Date: Fri, 26 Apr 2019 07:44:49 -0700
From: James Bottomley <James.Bottomley@...senPartnership.com>
To: Ingo Molnar <mingo@...nel.org>, Mike Rapoport <rppt@...ux.ibm.com>
Cc: linux-kernel@...r.kernel.org,
Alexandre Chartre <alexandre.chartre@...cle.com>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
Jonathan Adams <jwadams@...gle.com>,
Kees Cook <keescook@...omium.org>,
Paul Turner <pjt@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org,
linux-security-module@...r.kernel.org, x86@...nel.org,
Linus Torvalds <torvalds@...ux-foundation.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [RFC PATCH 2/7] x86/sci: add core implementation for system
call isolation
On Fri, 2019-04-26 at 10:31 +0200, Ingo Molnar wrote:
> * Mike Rapoport <rppt@...ux.ibm.com> wrote:
>
> > When enabled, system call isolation (SCI) allows execution of
> > system calls with reduced page tables. These page tables are
> > almost identical to the user page tables in PTI. The only addition
> > is the code page containing the system call entry function that
> > will continue execution after the context switch.
> >
> > Unlike the PTI page tables, there is no sharing at the higher
> > levels and the entire SCI page table hierarchy is cloned.
> >
> > The SCI page tables are created when a system call that requires
> > isolation is executed for the first time.
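
As a rough illustration of the "clone the whole hierarchy" point above,
here is a minimal user-space model of deep-copying a two-level page
table so that nothing is shared with the original at any level. This is
a sketch only; the actual patch walks the x86 pgd/p4d/pud/pmd/pte
levels with kernel helpers, and all names here are invented:

  #include <stdlib.h>
  #include <string.h>

  #define ENTRIES 512

  struct pte_table { unsigned long pte[ENTRIES]; };
  struct pgd_table { struct pte_table *pmd[ENTRIES]; };

  /* Deep-copy every populated lower-level table instead of sharing
   * it, unlike PTI, which shares kernel mappings at the top level. */
  static struct pgd_table *clone_pgd(const struct pgd_table *src)
  {
      struct pgd_table *dst = calloc(1, sizeof(*dst));
      int i;

      if (!dst)
          return NULL;

      for (i = 0; i < ENTRIES; i++) {
          if (!src->pmd[i])
              continue;
          dst->pmd[i] = malloc(sizeof(*dst->pmd[i]));
          if (!dst->pmd[i])
              return NULL; /* leaks on failure; fine for a sketch */
          memcpy(dst->pmd[i], src->pmd[i], sizeof(*dst->pmd[i]));
      }
      return dst;
  }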
> >
> > Whenever a system call should be executed in the isolated
> > environment, the context is switched to the SCI page tables. Any
> > further access to kernel memory will generate a page fault. The
> > page fault handler can verify that the access is safe and grant it
> > or kill the task otherwise.
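
The grant-or-kill decision described above can be modeled with a
self-contained toy like the one below. The allowlist and every function
name here are invented for illustration; in the patch this decision
would live in the x86 page-fault path:

  #include <stdbool.h>
  #include <stdio.h>

  struct range { unsigned long start, end; };

  /* Pretend this is the kernel data the isolated syscall may touch. */
  static const struct range allowed[] = {
      { 0xffff880000000000UL, 0xffff880000100000UL },
  };

  static bool sci_access_ok(unsigned long addr)
  {
      size_t i;

      for (i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
          if (addr >= allowed[i].start && addr < allowed[i].end)
              return true;
      return false;
  }

  static void sci_fault(unsigned long addr)
  {
      if (sci_access_ok(addr))
          printf("map %#lx into the SCI tables and retry\n", addr);
      else
          printf("unsafe access to %#lx: kill the task\n", addr);
  }

  int main(void)
  {
      sci_fault(0xffff880000001000UL);  /* granted */
      sci_fault(0xffffffff81000000UL);  /* denied  */
      return 0;
  }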
> >
> > The initial SCI implementation allows access to any kernel data,
> > but it limits access to kernel code in the following way (see the
> > sketch after this list):
> > * calls and jumps to known code symbols without offset are allowed
> > * calls and jumps into a known symbol with offset are allowed only
> > if that symbol was already accessed and the offset is in the next
> > page
> > * all other code accesses are blocked
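
One plausible reading of the three rules above, over an invented symbol
table; the "accessed" bit and the exact next-page arithmetic are
assumptions, not lifted from the patch:

  #include <stdbool.h>

  #define PAGE_SIZE 4096UL

  struct ksym {
      const char   *name;
      unsigned long addr;
      unsigned long size;
      bool          accessed; /* entered at its start already? */
  };

  static bool code_access_ok(struct ksym *syms, int n,
                             unsigned long target)
  {
      int i;

      for (i = 0; i < n; i++) {
          struct ksym *s = &syms[i];

          if (target == s->addr) {        /* rule 1: symbol start */
              s->accessed = true;
              return true;
          }
          if (target > s->addr && target < s->addr + s->size)
              /* rule 2: offset into a symbol we already entered,
               * reaching no further than the next page */
              return s->accessed && target - s->addr < 2 * PAGE_SIZE;
      }
      return false;                       /* rule 3: block the rest */
  }

  int main(void)
  {
      struct ksym syms[] = {
          { "sym", 0xffffffff81200000UL, 3 * PAGE_SIZE, false },
      };

      code_access_ok(syms, 1, syms[0].addr);             /* rule 1 */
      return !code_access_ok(syms, 1, syms[0].addr + 8); /* rule 2 */
  }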
> >
> > After the isolated system call finishes, the mappings created
> > during its execution are cleared.
> >
> > The entire SCI page table is lazily freed at task exit() time.
>
> So this basically uses a mechanism similar to the horrendous PTI CR3
> switching whenever a syscall seeks "protection", an overhead that is
> only somewhat mitigated by PCID.
>
> This might work on PTI-encumbered CPUs.
>
> AMD CPUs, meanwhile, don't need PTI, but they don't have PCID either.
>
> So this feature hurts the CPU maker who didn't mess up, and it hurts
> future CPUs that don't need PTI ...
>
> I really don't like where this is going. In a couple of years I
> really want to be able to think of PTI as a bad dream that is,
> fortunately, mostly over.

Perhaps ROP gadgets were a bad first example. The research objective
of the current patch set is really to investigate eliminating
sandboxing for containers. As you know, current sandboxes like gVisor
and Nabla try to reduce the exposure to horizontal exploits (the
ability of an untrusted tenant to exploit the shared kernel to attack
another tenant) by running significant chunks of kernel emulation code
in userspace, reducing the tenant's exposure to code in the shared
kernel. The price paid for this is pretty horrendous in performance
terms, but the benefit is multi-tenant safety.

The question we were looking into is whether, if we used per-tenant
in-kernel address space isolation to improve the security of kernel
system calls, such that an exploit either becomes detectable or its
consequences bounce back only on the tenant attempting it, we could
eliminate the emulation for that system call and instead pass it
through to the kernel, thus thinning out the sandbox layer without
losing the security benefits.
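
To make the pass-through idea concrete, here is a toy dispatcher, not
how gVisor or Nabla are actually structured: syscalls judged safe under
in-kernel isolation go straight to the kernel, while the rest stay
emulated in userspace. emulate_syscall() is a hypothetical stand-in:

  #define _GNU_SOURCE
  #include <sys/syscall.h>
  #include <unistd.h>
  #include <stdio.h>

  /* Hypothetical userspace emulation of an unvetted syscall. */
  static long emulate_syscall(long nr)
  {
      (void)nr;
      return -1; /* the sandbox would emulate it here */
  }

  static long dispatch(long nr)
  {
      if (nr == SYS_getpid)        /* vetted: pass straight through */
          return syscall(SYS_getpid);
      return emulate_syscall(nr);  /* everything else stays emulated */
  }

  int main(void)
  {
      printf("passthrough getpid: %ld\n", dispatch(SYS_getpid));
      return 0;
  }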

We are looking at other aspects as well, such as whether we can simply
run chunks of the kernel in the user's address space, as the sandbox
emulation currently does, or hide a tenant's data objects so that
they're not easily accessible from an exploited kernel.

James