Message-Id: <39A1C149-DA03-46D1-801F-0205DCD69A36@amacapital.net>
Date: Mon, 23 Jul 2018 14:50:50 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Pavel Machek <pavel@....cz>
Cc: Linus Torvalds <torvalds@...ux-foundation.org>,
Joerg Roedel <joro@...tes.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...nel.org>, Peter Anvin <hpa@...or.com>,
the arch/x86 maintainers <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
linux-mm <linux-mm@...ck.org>,
Andrew Lutomirski <luto@...nel.org>,
Dave Hansen <dave.hansen@...el.com>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Jürgen Groß <jgross@...e.com>,
Peter Zijlstra <peterz@...radead.org>,
Borislav Petkov <bp@...en8.de>, Jiri Kosina <jkosina@...e.cz>,
Boris Ostrovsky <boris.ostrovsky@...cle.com>,
Brian Gerst <brgerst@...il.com>,
David Laight <David.Laight@...lab.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
Eduardo Valentin <eduval@...zon.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Will Deacon <will.deacon@....com>,
"Liguori, Anthony" <aliguori@...zon.com>,
Daniel Gruss <daniel.gruss@...k.tugraz.at>,
Hugh Dickins <hughd@...gle.com>,
Kees Cook <keescook@...gle.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Waiman Long <llong@...hat.com>,
"David H . Gutteridge" <dhgutteridge@...patico.ca>,
Joerg Roedel <jroedel@...e.de>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Jiri Olsa <jolsa@...hat.com>,
Namhyung Kim <namhyung@...nel.org>
Subject: Re: [PATCH 0/3] PTI for x86-32 Fixes and Updates
> On Jul 23, 2018, at 2:38 PM, Pavel Machek <pavel@....cz> wrote:
>
>> On Mon 2018-07-23 12:00:08, Linus Torvalds wrote:
>>> On Mon, Jul 23, 2018 at 7:09 AM Pavel Machek <pavel@....cz> wrote:
>>>
>>> Meanwhile... it looks like gcc is not slowed down significantly, but
>>> other stuff sees 30%..40% slowdowns... which is rather
>>> significant.
>>
>> That is more or less expected.
>>
>> Gcc spends about 90+% of its time in user space, and the system calls
>> it *does* do tend to be "real work" (open/read/etc). And modern gccs
>> no longer have the pipe between cpp and cc1, so they don't have that
>> issue either (which would have shown the PTI slowdown a lot more).
>>
>> Some other loads will spend a lot more time traversing the user/kernel
>> boundary, and in 32-bit mode you won't be able to take advantage of
>> the address space IDs, so you really get the full effect.
>
> Understood. Just -- bzip2 should spend quite a lot of its time in
> userspace, too.
>
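As a rough, purely illustrative way to check that kind of claim (this
program is not part of the thread; the binary name used in the example
invocation below is arbitrary), one can look at how a workload's CPU time
splits between user and kernel mode: a large kernel share means many
user/kernel transitions, which is exactly where PTI costs the most.

/*
 * Illustrative sketch only: run a command and report how its CPU time
 * splits between user and kernel mode.
 */
#include <stdio.h>
#include <sys/resource.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(int argc, char **argv)
{
	if (argc < 2) {
		fprintf(stderr, "usage: %s command [args...]\n", argv[0]);
		return 1;
	}

	pid_t pid = fork();
	if (pid == 0) {
		execvp(argv[1], &argv[1]);
		_exit(127);			/* exec failed */
	}

	int status;
	waitpid(pid, &status, 0);

	struct rusage ru;
	getrusage(RUSAGE_CHILDREN, &ru);	/* covers waited-for children */
	double user = ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6;
	double sys  = ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6;

	printf("user %.3fs  sys %.3fs  (%.1f%% of CPU time in the kernel)\n",
	       user, sys, (user + sys) > 0 ? 100.0 * sys / (user + sys) : 0.0);
	return 0;
}

Invoked as, say, "./cputime gcc -O2 foo.c" versus "./cputime bzip2 -9
somefile", it gives a quick sanity check of whether a given workload
really does stay in userspace as much as gcc does.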
>>> Would it be possible to have per-process control of kpti? I have
>>> some processes where trading security for speed would make sense.
>>
>> That was pretty extensively discussed, and no sane model for it was
>> ever agreed upon. Some people wanted it per-thread, others per-mm,
>> and it wasn't clear how it should be set, how it should inherit
>> across fork/exec, or what the namespace rules etc. should be.
>>
>> You absolutely need to inherit it (so that you can say "I trust this
>> session" or whatever), but at the same time you *don't* want to
>> inherit if you have a server you trust that then spawns user processes
>> (think "I want systemd to not have the overhead, but the user
>> processes it spawns obviously do need protection").
>>
>> It was just a morass. Nothing came out of it. I guess people can
>> discuss it again, but it's not simple.
>
> I agree it is not easy. OTOH -- 30% of user-visible performance is a
> _lot_. That is worth spending man-years on... OK, the problem is not as
> severe on modern CPUs with address space IDs, but...
>
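For reference (again not part of the original exchange), both of those
facts can be checked from userspace: the "pcid" CPU flag in /proc/cpuinfo
says whether address space IDs are available, and
/sys/devices/system/cpu/vulnerabilities/meltdown reports whether the
running kernel applies PTI at all. A minimal sketch:

/*
 * Minimal sketch: report whether the CPU advertises PCID (address space
 * IDs) and what the kernel says about its Meltdown mitigation, using the
 * standard /proc/cpuinfo and sysfs "vulnerabilities" interfaces.
 */
#include <stdio.h>
#include <string.h>

static int cpu_has_pcid(void)
{
	char line[4096];
	FILE *f = fopen("/proc/cpuinfo", "r");

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		/* " pcid" is good enough here; no other flag matches it */
		if (!strncmp(line, "flags", 5) && strstr(line, " pcid")) {
			fclose(f);
			return 1;
		}
	}
	fclose(f);
	return 0;
}

int main(void)
{
	char mit[256] = "unknown";
	FILE *f = fopen("/sys/devices/system/cpu/vulnerabilities/meltdown", "r");

	if (f) {
		if (fgets(mit, sizeof(mit), f))
			mit[strcspn(mit, "\n")] = '\0';
		fclose(f);
	}
	printf("pcid: %s, meltdown: %s\n",
	       cpu_has_pcid() ? "yes" : "no", mit);
	return 0;
}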
> What I want is "if A can ptrace B, and B has pti disabled, A can have
> pti disabled as well". Now... I see someone may want to have it
> per-thread, because for stuff like a JavaScript JIT, a thread may have
> the rights to call ptrace but be unable to actually do so because the
> JIT removed that ability... hmm...
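Purely to illustrate what that rule could look like (this is hypothetical
code, not an existing or proposed kernel interface): ptrace_may_access()
and PTRACE_MODE_ATTACH_REALCREDS are real kernel primitives, but the
per-task PTI bit and its helpers are invented for the sketch.

/*
 * Hypothetical sketch only -- no such interface exists.  It spells out the
 * rule above: the calling task may run with PTI off if it could already
 * ptrace some task B that runs with PTI off, so it gains nothing it could
 * not get via ptrace anyway.  task_pti_disabled() and
 * set_task_pti_disabled() are made up for illustration.
 */
#include <linux/errno.h>
#include <linux/ptrace.h>
#include <linux/sched.h>

static int pti_disable_like_task(struct task_struct *b)
{
	if (!task_pti_disabled(b))		/* B must already run PTI-less */
		return -EPERM;

	if (!ptrace_may_access(b, PTRACE_MODE_ATTACH_REALCREDS))
		return -EPERM;			/* caller cannot ptrace B */

	set_task_pti_disabled(current);		/* inherit B's exemption */
	return 0;
}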
No, you don’t want that. The problem is that Meltdown isn’t a problem that exists in isolation. It’s very plausible that JavaScript code could trigger a speculation attack that, with PTI off, could read kernel memory.