Message-ID: <CF6CE3D3-7DA4-45E8-A593-172DFD6A8EEA@zytor.com>
Date: Sat, 15 May 2021 10:41:34 -0700
From: "H. Peter Anvin" <hpa@...or.com>
To: Andy Lutomirski <luto@...nel.org>, Ingo Molnar <mingo@...hat.com>,
Thomas Gleixner <tglx@...utronix.de>,
Borislav Petkov <bp@...en8.de>
CC: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v3 4/4] x86/syscall: use int everywhere for system call numbers
Heh. You have reinvented the original implementation.
On May 15, 2021 8:37:12 AM PDT, Andy Lutomirski <luto@...nel.org> wrote:
>On 5/14/21 6:10 PM, H. Peter Anvin wrote:
>> From: "H. Peter Anvin (Intel)" <hpa@...or.com>
>>
>> System call numbers are defined as int, so use int everywhere for
>> system call numbers. This patch is strictly a cleanup; it should not
>> change anything user visible; all ABI changes have been done in the
>> preceding patches.
>>
>> Signed-off-by: H. Peter Anvin (Intel) <hpa@...or.com>
>> ---
>>  arch/x86/entry/common.c        | 93 ++++++++++++++++++++++++----------
>> arch/x86/include/asm/syscall.h | 2 +-
>> 2 files changed, 66 insertions(+), 29 deletions(-)
>>
>> diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
>> index f51bc17262db..714804f0970c 100644
>> --- a/arch/x86/entry/common.c
>> +++ b/arch/x86/entry/common.c
>> @@ -36,49 +36,87 @@
>> #include <asm/irq_stack.h>
>>
>> #ifdef CONFIG_X86_64
>> -__visible noinstr void do_syscall_64(struct pt_regs *regs, unsigned long nr)
>> +
>> +static __always_inline bool do_syscall_x64(struct pt_regs *regs, int nr)
>> +{
>> + /*
>> + * Convert negative numbers to very high and thus out of range
>> + * numbers for comparisons. Use unsigned long to slightly
>> + * improve the array_index_nospec() generated code.
>> + */
>> + unsigned long unr = nr;
>> +
>> + if (likely(unr < NR_syscalls)) {
>> + unr = array_index_nospec(unr, NR_syscalls);
>> + regs->ax = sys_call_table[unr](regs);
>> + return true;
>> + }
>> + return false;
>> +}
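
The comment above leans on C's implicit signed-to-unsigned conversion:
any negative nr becomes a huge unsigned long, far above NR_syscalls, so
the single unsigned comparison rejects negative and too-large numbers
alike. A standalone userspace sketch of just that check (made-up table
and size, and without the array_index_nospec() clamp, so not the kernel
code itself):

#include <stdbool.h>
#include <stdio.h>

#define NR_SYSCALLS_SKETCH 4	/* made-up table size */

static long sys_a(void) { return 0; }
static long sys_b(void) { return 1; }
static long sys_c(void) { return 2; }
static long sys_d(void) { return 3; }

static long (*const sketch_table[NR_SYSCALLS_SKETCH])(void) = {
	sys_a, sys_b, sys_c, sys_d,
};

static bool sketch_dispatch(int nr, long *ret)
{
	/* A negative nr wraps to a value far above NR_SYSCALLS_SKETCH. */
	unsigned long unr = nr;

	if (unr < NR_SYSCALLS_SKETCH) {
		*ret = sketch_table[unr]();
		return true;
	}
	return false;
}

int main(void)
{
	long ret;
	int nrs[] = { 2, -1, 99 };

	for (int i = 0; i < 3; i++)
		printf("nr=%d -> %s\n", nrs[i],
		       sketch_dispatch(nrs[i], &ret) ? "dispatched" : "rejected");
	return 0;
}

Only nr=2 hits the table here; -1 and 99 both fall through to the
"return false" path on the same comparison.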
>
>How much do you like micro-optimization? You could be silly^Wclever and
>add a new syscall handler:
>
>long skip_syscall(struct pt_regs *regs)
>{
> return regs->ax;
>}
>
>and prepend this to the syscall tables -- it would be a sort-of-real
>syscall -1. Then the call sequence becomes:
>
>int adjusted_nr = nr + 1 (or nr - x32bit + 1);
>
>if (likely(adjusted_nr < NR_adjusted_syscalls)) {
> unr = array_index_nospec...;
> regs->ax = sys_call_table[unr](regs); /* might be a no-op! */
>} else {
> regs->ax = -ENOSYS;
>}
>
>which removes a branch from the fast path.
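
A rough userspace sketch of that table-prepend idea (every name and
size below is invented for illustration; if I remember the entry code
right, pt_regs->ax is preloaded with -ENOSYS before dispatch, which is
what lets the "no-op" slot do the right thing, and the x32 offset
handling is left out entirely):

#include <stdio.h>

#define NR_REAL_SYSCALLS	3
#define NR_ADJUSTED_SYSCALLS	(NR_REAL_SYSCALLS + 1)
#define ENOSYS_SKETCH		38

struct regs_sketch { long ax; };

/* Slot 0: the "sort-of-real syscall -1" that just hands back regs->ax. */
static long skip_syscall(struct regs_sketch *regs) { return regs->ax; }

static long sys_zero(struct regs_sketch *regs) { return 100; }
static long sys_one(struct regs_sketch *regs)  { return 101; }
static long sys_two(struct regs_sketch *regs)  { return 102; }

static long (*const adjusted_table[NR_ADJUSTED_SYSCALLS])(struct regs_sketch *) = {
	skip_syscall, sys_zero, sys_one, sys_two,
};

static void sketch_dispatch(struct regs_sketch *regs, int nr)
{
	/* Shift by one so nr == -1 wraps to index 0 (skip_syscall). */
	unsigned long unr = (unsigned long)nr + 1;

	if (unr < NR_ADJUSTED_SYSCALLS)
		regs->ax = adjusted_table[unr](regs);	/* might be a no-op! */
	else
		regs->ax = -ENOSYS_SKETCH;
}

int main(void)
{
	int nrs[] = { 1, -1, 42 };

	for (int i = 0; i < 3; i++) {
		struct regs_sketch regs = { .ax = -ENOSYS_SKETCH };

		sketch_dispatch(&regs, nrs[i]);
		printf("nr=%d -> ax=%ld\n", nrs[i], regs.ax);
	}
	return 0;
}

The -1 case no longer needs its own branch: it takes the normal table
path and lands on skip_syscall, which returns whatever is already in ax.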
--
Sent from my Android device with K-9 Mail. Please excuse my brevity.