Message-ID: <BC75A7E8-8870-4E71-9F81-0309CF4853FB@zytor.com>
Date:   Sat, 15 May 2021 14:07:01 -0700
From:   "H. Peter Anvin" <hpa@...or.com>
To:     Andy Lutomirski <luto@...nel.org>, Ingo Molnar <mingo@...hat.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Borislav Petkov <bp@...en8.de>
CC:     Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 4/4] x86/syscall: use int everywhere for system call numbers

Pretty soon you'll have the whole entry code rewritten back into assembly 😆

On May 15, 2021 11:48:22 AM PDT, Andy Lutomirski <luto@...nel.org> wrote:
>
>
>On Sat, May 15, 2021, at 10:42 AM, H. Peter Anvin wrote:
>> Answer: I don't think it is a good idea to have the system call table
>offset ... it seems like an unnecessary debugging headache.
>
>Emit it in asm:
>
>table_minus_one:
> .quad not_a_syscall
>table:
> (real table here)
>
>/me runs.
>
>
>
>> 
>> On May 15, 2021 8:37:12 AM PDT, Andy Lutomirski <luto@...nel.org> wrote:
>> >On 5/14/21 6:10 PM, H. Peter Anvin wrote:
>> >> From: "H. Peter Anvin (Intel)" <hpa@...or.com>
>> >> 
>> >> System call numbers are defined as int, so use int everywhere for
>> >> system call numbers. This patch is strictly a cleanup; it should not
>> >> change anything user visible; all ABI changes have been done in the
>> >> preceding patches.
>> >> 
>> >> Signed-off-by: H. Peter Anvin (Intel) <hpa@...or.com>
>> >> ---
>> >>  arch/x86/entry/common.c        | 93 ++++++++++++++++++++++++----------
>> >>  arch/x86/include/asm/syscall.h |  2 +-
>> >>  2 files changed, 66 insertions(+), 29 deletions(-)
>> >> 
>> >> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> >> index f51bc17262db..714804f0970c 100644
>> >> --- a/arch/x86/entry/common.c
>> >> +++ b/arch/x86/entry/common.c
>> >> @@ -36,49 +36,87 @@
>> >>  #include <asm/irq_stack.h>
>> >>  
>> >>  #ifdef CONFIG_X86_64
>> >> -__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned long nr)
>> >> +
>> >> +static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
>> >> +{
>> >> +	/*
>> >> +	 * Convert negative numbers to very high and thus out of range
>> >> +	 * numbers for comparisons. Use unsigned long to slightly
>> >> +	 * improve the array_index_nospec() generated code.
>> >> +	 */
>> >> +	unsigned long unr = nr;
>> >> +
>> >> +	if (likely(unr < NR_syscalls)) {
>> >> +		unr = array_index_nospec(unr, NR_syscalls);
>> >> +		regs->ax = sys_call_table[unr](regs);
>> >> +		return true;
>> >> +	}
>> >> +	return false;
>> >> +}
>> >
>> >How much do you like micro-optimization?  You could be silly^Wclever and
>> >add a new syscall handler:
>> >
>> >long skip_syscall(struct pt_regs *regs)
>> >{
>> > return regs->ax;
>> >}
>> >
>> >and prepend this to the syscall tables -- it would be a sort-of-real
>> >syscall -1.  Then the call sequence becomes:
>> >
>> >int adjusted_nr = nr + 1 (or nr - x32bit + 1);
>> >
>> >if (likely(adjusted_nr < NR_adjusted_syscalls)) {
>> >   unr = array_index_nospec...;
>> >   regs->ax = sys_call_table[unr](regs);  /* might be a no-op! */
>> >} else {
>> >    regs->ax = -ENOSYS;
>> >}
>> >
>> >which removes a branch from the fast path.
>> 
>> -- 
>> Sent from my Android device with K-9 Mail. Please excuse my brevity.
>> 

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.
