Message-Id: <cover.1532281180.git.luto@kernel.org>
Date:   Sun, 22 Jul 2018 10:45:25 -0700
From:   Andy Lutomirski <luto@...nel.org>
To:     x86@...nel.org, LKML <linux-kernel@...r.kernel.org>
Cc:     Borislav Petkov <bp@...en8.de>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Dave Hansen <dave.hansen@...ux.intel.com>,
        Andy Lutomirski <luto@...nel.org>
Subject: [RFC 0/2] Get rid of the entry trampoline

Hi all-

I think there's general agreement that the current entry trampoline
sucks, mainly because it's dog-slow.  Thanks, Spectre.

There are three possible fixes I know of:

a) Linus' hack: use R11 for scratch space.  This doesn't actually
   speed it up, but it improves the addressing situation a bit.
   I don't like it, though: it causes the SYSCALL64 path to forget
   the arithmetic flags and all of the MSR_SYSCALL_MASK flags.  The
   latter may be a showstopper, given that we've seen some evidence
   of nasty Wine use cases that expect setting EFLAGS.NT and doing
   a system call to actually do something intelligent.  Similarly,
   there could easily be user programs out there that set AC because
   they want alignment checking and expect AC to remain set across
   system calls (see the sketch after this list).

b) Move the trampoline within 2G of the entry text and copy it for
   each CPU.  This is certainly possible, but it's a bit gross,
   and it uses num_possible_cpus() * 64 bytes of memory (rounded
   up to a page; see the back-of-the-envelope sizing after this
   list).  It will also result in more complicated code.

c) This series.  Just make %gs work in the entry code, so the
   trampoline isn't needed at all (see the %gs illustration further
   below).  It's actually a net code deletion.
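
To make the AC concern in (a) concrete, here's a minimal user-space
sketch (illustration only, not part of this series) of the pattern
that would break if the saved flags were forgotten:

#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

#define X86_EFLAGS_AC	(1UL << 18)

static unsigned long read_flags(void)
{
	unsigned long flags;

	asm volatile ("pushfq; popq %0" : "=r" (flags));
	return flags;
}

static void write_flags(unsigned long flags)
{
	asm volatile ("pushq %0; popfq" : : "r" (flags) : "cc");
}

int main(void)
{
	unsigned long after;

	/* Enable alignment checking and expect it to stick. */
	write_flags(read_flags() | X86_EFLAGS_AC);
	syscall(SYS_getpid);			/* any real system call */
	after = read_flags();
	write_flags(after & ~X86_EFLAGS_AC);	/* avoid #AC inside libc */

	printf("EFLAGS.AC after syscall: %s\n",
	       (after & X86_EFLAGS_AC) ? "still set" : "lost");
	return 0;
}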

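And a back-of-the-envelope for (b)'s memory cost, assuming the
64-byte-per-CPU figure above (standalone sketch; the constants are
illustrative, not taken from the kernel):

#include <stdio.h>

#define STUB_SIZE	64UL
#define PAGE_SIZE	4096UL
#define ROUND_UP(x, a)	(((x) + (a) - 1) & ~((a) - 1))

int main(void)
{
	unsigned long cpus;

	for (cpus = 64; cpus <= 8192; cpus *= 4) {
		unsigned long bytes = ROUND_UP(cpus * STUB_SIZE, PAGE_SIZE);

		printf("%4lu possible CPUs -> %6lu bytes (%3lu pages)\n",
		       cpus, bytes, bytes / PAGE_SIZE);
	}
	return 0;
}
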
I suspect that (b) would be faster in code that does a lot of system
calls and doesn't totally blow away the cache or the TLB between
system calls.  I suspect that (c) is faster in code that does
cache-cold system calls.
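
For anyone who wants to see what "make %gs work" buys us: per-CPU data
is reached with a %gs-relative access (GSBASE + offset).  Here's a quick
user-space illustration of that addressing mode (demo only, not kernel
code; it just points GSBASE at a local array):

#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <asm/prctl.h>

int main(void)
{
	unsigned long fake_percpu[2] = { 0x1234, 0x5678 };
	unsigned long val;

	/* Point GSBASE at our block, roughly the way the kernel points
	 * it at the current CPU's per-CPU area. */
	if (syscall(SYS_arch_prctl, ARCH_SET_GS, fake_percpu) != 0) {
		perror("arch_prctl(ARCH_SET_GS)");
		return 1;
	}

	/* A %gs-relative load: effective address = GSBASE + 8. */
	asm volatile ("movq %%gs:8, %0" : "=r" (val));
	printf("read 0x%lx via %%gs:8\n", val);	/* expect 0x5678 */
	return 0;
}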

Andy Lutomirski (2):
  x86/entry/64: Use the TSS sp2 slot for rsp_scratch
  x86/pti/64: Remove the SYSCALL64 entry trampoline

 arch/x86/entry/entry_64.S          | 66 +-----------------------------
 arch/x86/include/asm/processor.h   |  5 +++
 arch/x86/include/asm/thread_info.h |  1 +
 arch/x86/kernel/asm-offsets_64.c   |  1 +
 arch/x86/kernel/cpu/common.c       | 11 +----
 arch/x86/kernel/kprobes/core.c     | 10 +----
 arch/x86/kernel/process_64.c       |  2 -
 arch/x86/kernel/vmlinux.lds.S      | 10 -----
 arch/x86/mm/cpu_entry_area.c       |  5 ---
 arch/x86/mm/pti.c                  | 24 ++++++++++-
 10 files changed, 33 insertions(+), 102 deletions(-)

-- 
2.17.1
