Date:	Wed, 9 Dec 2015 13:15:54 -0800
From:	Andy Lutomirski <luto@...capital.net>
To:	Brian Gerst <brgerst@...il.com>
Cc:	Andy Lutomirski <luto@...nel.org>,
	"the arch/x86 maintainers" <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Borislav Petkov <bp@...en8.de>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Denys Vlasenko <dvlasenk@...hat.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: [PATCH] x86/entry/64: Remove duplicate syscall table for fast path

On Wed, Dec 9, 2015 at 1:08 PM, Brian Gerst <brgerst@...il.com> wrote:
> On Wed, Dec 9, 2015 at 1:53 PM, Andy Lutomirski <luto@...capital.net> wrote:
>> On Wed, Dec 9, 2015 at 5:02 AM, Brian Gerst <brgerst@...il.com> wrote:
>>> Instead of using a duplicate syscall table for the fast path, create stubs for
>>> the syscalls that need pt_regs; the stubs save the extra registers if the
>>> slow-path flag is not set.
>>>
>>> Signed-off-by: Brian Gerst <brgerst@...il.com>
>>> To: Andy Lutomirski <luto@...capital.net>
>>> Cc: Andy Lutomirski <luto@...nel.org>
>>> Cc: the arch/x86 maintainers <x86@...nel.org>
>>> Cc: Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
>>> Cc: Borislav Petkov <bp@...en8.de>
>>> Cc: Frédéric Weisbecker <fweisbec@...il.com>
>>> Cc: Denys Vlasenko <dvlasenk@...hat.com>
>>> Cc: Linus Torvalds <torvalds@...ux-foundation.org>
>>> ---
>>>
>>> Applies on top of Andy's syscall cleanup series.
>>
>> A couple questions:
>>
>>> @@ -306,15 +306,37 @@ END(entry_SYSCALL_64)
>>>
>>>  ENTRY(stub_ptregs_64)
>>>         /*
>>> -        * Syscalls marked as needing ptregs that go through the fast path
>>> -        * land here.  We transfer to the slow path.
>>> +        * Syscalls marked as needing ptregs land here.
>>> +        * If we are on the fast path, we need to save the extra regs.
>>> +        * If we are on the slow path, the extra regs are already saved.
>>>          */
>>> -       DISABLE_INTERRUPTS(CLBR_NONE)
>>> -       TRACE_IRQS_OFF
>>> -       addq    $8, %rsp
>>> -       jmp     entry_SYSCALL64_slow_path
>>> +       movq    PER_CPU_VAR(cpu_current_top_of_stack), %r10
>>> +       testl   $TS_SLOWPATH, ASM_THREAD_INFO(TI_status, %r10, 0)
>>> +       jnz     1f
>>
>> OK (but see below), but why not do:
>>
>> addq $8, %rsp
>> jmp entry_SYSCALL64_slow_path
>
> I've always been averse to doing things like that because it breaks
> call/return branch prediction.

I'd agree with you there, except that the syscalls in question really
don't matter enough for performance to justify worrying about a handful
of cycles from a return misprediction.  We're still avoiding IRET
regardless (to the extent possible), and that was always the major
factor.

> Also, are there any side effects to calling enter_from_user_mode()
> more than once?

A warning that invariants are broken if you have an appropriately
configured kernel.
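
(For context: enter_from_user_mode() in arch/x86/entry/common.c is, in
kernels of roughly this vintage, something like the sketch below.
Calling it a second time trips the CT_WARN_ON() once context tracking
is enabled, because the state has already moved to CONTEXT_KERNEL.)

#ifdef CONFIG_CONTEXT_TRACKING
/* Called on entry from user mode with IRQs off. */
__visible void enter_from_user_mode(void)
{
	/* Fires if the context-tracking state says we already left user mode. */
	CT_WARN_ON(ct_state() != CONTEXT_USER);
	user_exit();
}
#endif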

>
>> here instead of the stack munging below?
>>
>>> +       subq    $SIZEOF_PTREGS, %r10
>>> +       SAVE_EXTRA_REGS base=r10
>>> +       movq    %r10, %rbx
>>> +       call    *%rax
>>> +       movq    %rbx, %r10
>>> +       RESTORE_EXTRA_REGS base=r10
>>> +       ret
>>> +1:
>>> +       jmp     *%rax
>>>  END(stub_ptregs_64)
>
> After some thought, that can be simplified.  It's only executed on the
> fast path, so pt_regs is at 8(%rsp).
>
>> Also, can we not get away with keying off rip or rsp instead of
>> ti->status?  That should be faster and less magical IMO.
>
> Checking if the return address is the instruction after the fast path
> dispatch would work.
>
> Simplified version:
> ENTRY(stub_ptregs_64)
>     cmpl $fast_path_return, (%rsp)

Does that instruction actually work the way you want it to?  (Does it
link?)  I think you might need to use leaq the way I did in my patch.
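
Roughly, the leaq variant would look like this (fast_path_return is a
hypothetical label on the instruction right after the fast-path
dispatch call, and the scratch register is just illustrative):

	leaq	fast_path_return(%rip), %r11	/* build the full 64-bit address, RIP-relative */
	cmpq	%r11, (%rsp)			/* compare against the pushed return address */
	jne	1f

That sidesteps the relocation/immediate-size question and compares all
64 bits of the return address (cmpl would only check the low 32 bits).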

>     jne 1f
>     SAVE_EXTRA_REGS offset=8
>     call *%rax
>     RESTORE_EXTRA_REGS offset=8
>     ret
> 1:
>     jmp *%rax
> END(stub_ptregs_64)

This'll work, I think, but I'd still prefer to keep as much complexity
as possible in the slow path.  I could be convinced otherwise, though --
this variant is reasonably clean.

--Andy
