Message-Id: <cover.1562185330.git.luto@kernel.org>
Date: Wed, 3 Jul 2019 13:34:01 -0700
From: Andy Lutomirski <luto@...nel.org>
To: LKML <linux-kernel@...r.kernel.org>
Cc: x86@...nel.org, Borislav Petkov <bp@...en8.de>,
Peter Zijlstra <peterz@...radead.org>,
Andy Lutomirski <luto@...nel.org>
Subject: [PATCH 0/4] x32 and compat syscall improvements

This series contains a couple of minor cleanups and a major change
to the way that x32 syscalls work. We currently have a range of
syscall numbers starting at 512 that are rather annoying -- they've
been known to cause security problems for seccomp filter authors who
don't know about them, and they cause people to think that x86_64
will run out of syscall numbers after 511 due to a conflict with
x32.
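
For readers who have not hit the seccomp angle before: userspace invokes
an x32 system call by ORing __X32_SYSCALL_BIT (0x40000000) into the
syscall number, so a filter written only against the native x86_64
numbers can be sidestepped unless it also accounts for that bit.  The
snippet below is a minimal userspace sketch, not part of this series,
of the usual defensive pattern: refuse anything with the x32 bit set
before applying any per-syscall policy.

#include <stddef.h>
#include <linux/audit.h>
#include <linux/filter.h>
#include <linux/seccomp.h>
#include <sys/prctl.h>

/* Mirrors __X32_SYSCALL_BIT from arch/x86/include/uapi/asm/unistd.h. */
#define X32_SYSCALL_BIT	0x40000000

static struct sock_filter filter[] = {
	/* Refuse to run if this is not a native x86_64 process. */
	BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		 offsetof(struct seccomp_data, arch)),
	BPF_JUMP(BPF_JMP | BPF_JEQ | BPF_K, AUDIT_ARCH_X86_64, 1, 0),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
	/* Kill on any syscall number with the x32 bit set. */
	BPF_STMT(BPF_LD | BPF_W | BPF_ABS,
		 offsetof(struct seccomp_data, nr)),
	BPF_JUMP(BPF_JMP | BPF_JGE | BPF_K, X32_SYSCALL_BIT, 0, 1),
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_KILL_PROCESS),
	/*
	 * Allow everything else; a real policy would make its
	 * per-syscall decisions here instead of allowing the world.
	 */
	BPF_STMT(BPF_RET | BPF_K, SECCOMP_RET_ALLOW),
};

static const struct sock_fprog prog = {
	.len = sizeof(filter) / sizeof(filter[0]),
	.filter = filter,
};

static int install_filter(void)
{
	/* no_new_privs is required to load a filter without CAP_SYS_ADMIN. */
	if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0))
		return -1;
	return prctl(PR_SET_SECCOMP, SECCOMP_MODE_FILTER, &prog);
}

Nothing in that sketch is specific to this series; it is just the
workaround filter authors have needed while the 512..547 range exists.
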
With this series applied, 512-547 can be just a silly legacy oddity
just like all the other silly legacy oddities we have, and we can go
on with our lives without kludges starting at 548 :)

Andy Lutomirski (4):
x86/syscalls: Use the compat versions of rt_sigsuspend() and
rt_sigprocmask()
x86/syscalls: Disallow compat entries for all types of 64-bit syscalls
x86/syscalls: Split the x32 syscalls into their own table
x86/syscalls: Make __X32_SYSCALL_BIT be unsigned long

arch/x86/entry/common.c | 13 +--
arch/x86/entry/syscall_64.c | 25 ++++++
arch/x86/entry/syscalls/syscall_32.tbl | 4 +-
arch/x86/entry/syscalls/syscalltbl.sh | 35 ++++----
arch/x86/include/asm/syscall.h | 4 +
arch/x86/include/asm/unistd.h | 6 --
arch/x86/include/uapi/asm/unistd.h | 2 +-
arch/x86/kernel/asm-offsets_64.c | 20 +++++
tools/testing/selftests/x86/Makefile | 2 +-
.../testing/selftests/x86/syscall_numbering.c | 89 +++++++++++++++++++
10 files changed, 168 insertions(+), 32 deletions(-)
create mode 100644 tools/testing/selftests/x86/syscall_numbering.c
--
2.21.0