Message-ID: <627d9321-466f-c4ed-c658-6b8567648dc6@intel.com>
Date: Fri, 26 Apr 2019 07:46:18 -0700
From: Dave Hansen <dave.hansen@...el.com>
To: Mike Rapoport <rppt@...ux.ibm.com>, linux-kernel@...r.kernel.org
Cc: Alexandre Chartre <alexandre.chartre@...cle.com>,
Andy Lutomirski <luto@...nel.org>,
Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>, Ingo Molnar <mingo@...hat.com>,
James Bottomley <James.Bottomley@...senpartnership.com>,
Jonathan Adams <jwadams@...gle.com>,
Kees Cook <keescook@...omium.org>,
Paul Turner <pjt@...gle.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>, linux-mm@...ck.org,
linux-security-module@...r.kernel.org, x86@...nel.org
Subject: Re: [RFC PATCH 2/7] x86/sci: add core implementation for system call
isolation
On 4/25/19 2:45 PM, Mike Rapoport wrote:
> After the isolated system call finishes, the mappings created during its
> execution are cleared.
Yikes. I guess that stops someone from calling write() a bunch of times
on every filesystem using every block device driver and all the DM code
to get a lot of code/data faulted in. But, it also means not even
long-running processes will ever have a chance of behaving anything
close to normally.
Is this something you think can be rectified or is there something
fundamental that would keep SCI page tables from being cached across
different invocations of the same syscall?