Message-ID: <alpine.DEB.2.21.1809072145460.1402@nanos.tec.linutronix.de>
Date: Fri, 7 Sep 2018 21:54:11 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Andy Lutomirski <luto@...nel.org>
cc: Peter Zijlstra <peterz@...radead.org>, X86 ML <x86@...nel.org>,
Borislav Petkov <bp@...en8.de>,
LKML <linux-kernel@...r.kernel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Joerg Roedel <joro@...tes.org>, Jiri Olsa <jolsa@...hat.com>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH v2 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline

On Wed, 5 Sep 2018, Andy Lutomirski wrote:
> On Tue, Sep 4, 2018 at 12:04 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > Can we have a few words on why this solution and not this alternative? I
> > mean, you raise the possibility, but then surely you chose not to
> > implement that. Might as well share that with us.
>
> I can give some pros and cons. With the other approach:
>
> - We avoid a pipeline stall.

Which is good.

> - We execute from an extra page and read from another extra page
> during the syscall. (The latter is because we need to use a relative
> addressing mode to find sp1 -- it's the same *cacheline* we'd use
> anyway, but we're accessing it using an alias, so it's an extra TLB
> entry.)

Ok, but is this really an issue with PTI?

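As an aside, the aliasing cost Andy describes is easy to see from
userspace. Below is a minimal sketch (illustrative only, not the actual
entry code; the file name and sizes are arbitrary, and it relies on
glibc providing memfd_create()): one physical page mapped at two virtual
addresses shares the cachelines but needs a TLB entry per alias.

/* Illustrative userspace demo, not kernel entry code: one physical
 * page mapped at two virtual addresses.  Both mappings hit the same
 * cachelines, but each virtual alias needs its own TLB entry. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
        /* memfd_create() hands us an anonymous page we can map twice
         * (error handling omitted for brevity). */
        int fd = memfd_create("alias-demo", 0);
        ftruncate(fd, 4096);

        char *a = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        char *b = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

        strcpy(a, "same physical page");
        /* Same data through both aliases: one cacheline, two TLB entries. */
        printf("a=%p b=%p  via b: %s\n", (void *)a, (void *)b, b);
        return 0;
}

Writing through 'a' and reading through 'b' shows both aliases are backed
by the same page; the second TLB entry is the only added cost, which is
the point being made above.
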
> - We use more memory. This would be one page per CPU for a simple
> implementation and 64-ish bytes per CPU or one page per node for a
> more complex implementation.

That's the least interesting argument, really.

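For a sense of scale (illustrative arithmetic, the numbers are not from
the thread): with 4 KiB pages on a 64-CPU machine, one page per CPU comes
to 256 KiB, 64 bytes per CPU comes to 4 KiB total, and one page per node
on a two-node box is 8 KiB.
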
> - More code complexity.

Ok, but how much more complex is that code?

> I'm not convinced this is a good tradeoff.

Well, the real question here is whether this has any advantage over
exposing the percpu area.

Thanks,
tglx