Message-ID: <20180907123651.GZ24106@hirez.programming.kicks-ass.net>
Date: Fri, 7 Sep 2018 14:36:51 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Andy Lutomirski <luto@...nel.org>
Cc: X86 ML <x86@...nel.org>, Borislav Petkov <bp@...en8.de>,
LKML <linux-kernel@...r.kernel.org>,
Dave Hansen <dave.hansen@...ux.intel.com>,
Adrian Hunter <adrian.hunter@...el.com>,
Alexander Shishkin <alexander.shishkin@...ux.intel.com>,
Arnaldo Carvalho de Melo <acme@...nel.org>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Joerg Roedel <joro@...tes.org>, Jiri Olsa <jolsa@...hat.com>,
Andi Kleen <ak@...ux.intel.com>
Subject: Re: [PATCH v2 3/3] x86/pti/64: Remove the SYSCALL64 entry trampoline
On Wed, Sep 05, 2018 at 02:31:28PM -0700, Andy Lutomirski wrote:
> On Tue, Sep 4, 2018 at 12:04 AM, Peter Zijlstra <peterz@...radead.org> wrote:
> > On Mon, Sep 03, 2018 at 03:59:44PM -0700, Andy Lutomirski wrote:
> >> There is a possible alternative approach: we could instead move the
> >> trampoline within 2G of the entry text and make a separate copy for
> >> each CPU. Then we could use a direct jump to rejoin the normal
> >> entry path.
> >
> > Can we have a few words on why this solution and not this alternative? I
> > mean, you raise the possibility, but then surely you chose not to
> > implement that. Might as well share that with us.
>
> I can give some pros and cons. With the other approach:
>
> - We avoid a pipeline stall.
> - We execute from an extra page and read from another extra page
> during the syscall. (The latter is because we need to use a relative
> addressing mode to find sp1 -- it's the same *cacheline* we'd use
> anyway, but we're accessing it using an alias, so it's an extra TLB
> entry.)
> - We use more memory. This would be one page per CPU for a simple
> implementation and 64-ish bytes per CPU or one page per node for a
> more complex implementation.
> - More code complexity.
>
> I'm not convinced this is a good tradeoff.
Fair enough, thanks!
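
For anyone following along: the "within 2G" requirement in the alternative
comes from the reach of a direct JMP, whose displacement is a signed 32-bit
immediate relative to the end of the instruction. Below is a rough,
user-space-only sketch of the rel32 patching a per-CPU trampoline copy would
need. It is purely illustrative -- not the actual kernel code, and the helper
name is made up.

#include <stdint.h>
#include <string.h>

#define JMP_REL32_OPCODE 0xe9	/* x86 "jmp rel32" */
#define JMP_REL32_SIZE   5

/*
 * Illustrative helper (hypothetical name): emit "jmp rel32" at 'site'
 * targeting 'target'.  The displacement is measured from the end of the
 * 5-byte instruction and must fit in a signed 32-bit immediate, which is
 * exactly why a per-CPU trampoline copy would have to sit within 2G of
 * the shared entry text.
 */
static int emit_jmp_rel32(uint8_t *site, const void *target)
{
	int64_t disp = (int64_t)(intptr_t)target -
		       (int64_t)(intptr_t)(site + JMP_REL32_SIZE);
	int32_t disp32 = (int32_t)disp;

	if (disp != (int64_t)disp32)
		return -1;	/* out of +/- 2G range: rel32 cannot reach */

	site[0] = JMP_REL32_OPCODE;
	memcpy(&site[1], &disp32, sizeof(disp32));	/* imm32, little-endian */
	return 0;
}

With a direct jump like this there is no indirect branch to stall on, but each
CPU needs its own patched copy of the trampoline page -- which is the memory
and complexity cost weighed above.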