Message-ID: <CAK1hOcOjNfyrb3k5euNHDQdc-2kKKi2DkBev+LLkcVd4VUszyQ@mail.gmail.com>
Date: Tue, 10 Mar 2015 15:00:26 +0100
From: Denys Vlasenko <vda.linux@...glemail.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: Ingo Molnar <mingo@...nel.org>,
Denys Vlasenko <dvlasenk@...hat.com>,
Linus Torvalds <torvalds@...ux-foundation.org>,
Steven Rostedt <rostedt@...dmis.org>,
Borislav Petkov <bp@...en8.de>,
"H. Peter Anvin" <hpa@...or.com>, Oleg Nesterov <oleg@...hat.com>,
Frederic Weisbecker <fweisbec@...il.com>,
Alexei Starovoitov <ast@...mgrid.com>,
Will Drewry <wad@...omium.org>,
Kees Cook <keescook@...omium.org>, X86 ML <x86@...nel.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 3/4] x86: save user rsp in pt_regs->sp on SYSCALL64 fastpath
On Tue, Mar 10, 2015 at 2:26 PM, Andy Lutomirski <luto@...capital.net> wrote:
> usersp is IMO tolerable. The nasty thing is the FIXUP_TOP_OF_STACK /
> RESTORE_TOP_OF_STACK garbage, and this patch is the main step toward
> killing that off completely. I've still never convinced myself that
> there aren't ptrace-related info leaks in there.
>
> Denys, did you ever benchmark what happens if we use push instead of
> mov? I bet that we get that cycle back and more, not to mention much
> less icache usage.
Yes, I did.
The push conversion seems to perform the same as the current, MOV-based code.
The expected win was that we lose the two huge 12-byte insns
which store __USER_CS and __USER_DS in the iret frame.
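To make the comparison concrete, a minimal sketch of the two styles
(the pt_regs offset macros and the per-cpu scratch slot name are
illustrative, not the exact entry_64.S code):

	/* MOV-based: store into a pre-allocated frame */
	movq	$__USER_DS, SS(%rsp)		/* 12-byte insn */
	movq	$__USER_CS, CS(%rsp)		/* 12-byte insn */

	/* PUSH-based: build the iret frame top-down */
	pushq	$__USER_DS			/* pt_regs->ss, 2 bytes */
	pushq	PER_CPU_VAR(rsp_scratch)	/* pt_regs->sp (saved user rsp) */
	pushq	%r11				/* pt_regs->flags (from SYSCALL) */
	pushq	$__USER_CS			/* pt_regs->cs, 2 bytes */
	pushq	%rcx				/* pt_regs->ip (from SYSCALL) */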
MOVQ imm,ofs(%rsp) has a very unfortunate encoding in x86:
- needs a REX prefix
- no sign-extending imm8 form exists for it, so the immediate is always 4 bytes
- ofs in our case can't fit into 8 bits, so the displacement is also 4 bytes
- (%rsp) as a base requires a SIB byte
In my tests, each such instruction adds one cycle.
Compare this to PUSH imm8, which is only 2 bytes.
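For reference, the two encodings side by side (frame offset 0x88 is
illustrative; 0x2b is __USER_DS on x86-64):

	48 c7 84 24 88 00 00 00 2b 00 00 00	movq  $0x2b,0x88(%rsp)	# 12 bytes
	6a 2b					pushq $0x2b		# 2 bytes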