Message-ID: <CALCETrWJq67VtgJmHeV+4AGqTeYnX9KvjJMMSb+LLyA7yUZ7Qw@mail.gmail.com>
Date: Wed, 16 Jul 2014 13:53:36 -0700
From: Andy Lutomirski <luto@...capital.net>
To: Alexei Starovoitov <ast@...mgrid.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Kees Cook <keescook@...omium.org>,
Will Drewry <wad@...omium.org>,
James Morris <james.l.morris@...cle.com>,
Oleg Nesterov <oleg@...hat.com>, X86 ML <x86@...nel.org>,
"linux-arm-kernel@...ts.infradead.org"
<linux-arm-kernel@...ts.infradead.org>,
Linux MIPS Mailing List <linux-mips@...ux-mips.org>,
linux-arch <linux-arch@...r.kernel.org>,
LSM List <linux-security-module@...r.kernel.org>
Subject: Re: [PATCH 6/7] x86_64,entry: Treat regs->ax the same in fastpath and
slowpath syscalls
On Wed, Jul 16, 2014 at 1:08 PM, Alexei Starovoitov <ast@...mgrid.com> wrote:
> On Tue, Jul 15, 2014 at 12:32 PM, Andy Lutomirski <luto@...capital.net> wrote:
>> For slowpath syscalls, we initialize regs->ax to -ENOSYS and stick
>> the syscall number into regs->orig_ax prior to any possible tracing
>> and syscall execution. This is user-visible ABI used by ptrace
>> syscall emulation and seccomp.
>>
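[Aside for anyone not following the ptrace angle: the "user-visible
ABI" here is what a tracer sees at a syscall-entry stop.  Here's a
minimal userspace sketch, mine and not part of the patch, with error
checking omitted; orig_rax and rax are the x86_64 fields of struct
user_regs_struct:

    /* Sketch only: assumes x86_64, no error handling. */
    #include <signal.h>
    #include <stdio.h>
    #include <sys/ptrace.h>
    #include <sys/syscall.h>
    #include <sys/user.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        struct user_regs_struct regs;
        pid_t child = fork();

        if (child == 0) {
            ptrace(PTRACE_TRACEME, 0, NULL, NULL);
            raise(SIGSTOP);          /* let the parent sync up */
            syscall(SYS_getpid);     /* any real syscall will do */
            _exit(0);
        }

        waitpid(child, NULL, 0);                   /* SIGSTOP stop */
        ptrace(PTRACE_SYSCALL, child, NULL, NULL); /* run to entry */
        waitpid(child, NULL, 0);                   /* entry stop */

        ptrace(PTRACE_GETREGS, child, NULL, &regs);
        /* orig_rax holds the syscall number; rax was set to
         * -ENOSYS before the tracing stop. */
        printf("orig_rax=%lld rax=%lld\n",
               (long long)regs.orig_rax, (long long)regs.rax);

        kill(child, SIGKILL);
        return 0;
    }

At the entry stop this should print the getpid syscall number in
orig_rax and -38, i.e. -ENOSYS, in rax; the point of this patch is to
make the fast path establish the same invariant.]
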
>> For fastpath syscalls, there's no good reason not to do the same
>> thing. It's even slightly simpler than what we're currently doing.
>> It probably has no measurable performance impact. It should have
>> no user-visible effect.
>>
>> The purpose of this patch is to prepare for seccomp-based syscall
>> emulation in the fast path. This change is just subtle enough that
>> I'm keeping it separate.
>>
>> Signed-off-by: Andy Lutomirski <luto@...capital.net>
>> ---
>> arch/x86/include/asm/calling.h | 6 +++++-
>> arch/x86/kernel/entry_64.S | 11 +++--------
>> 2 files changed, 8 insertions(+), 9 deletions(-)
>>
>> diff --git a/arch/x86/include/asm/calling.h b/arch/x86/include/asm/calling.h
>> index cb4c73b..76659b6 100644
>> --- a/arch/x86/include/asm/calling.h
>> +++ b/arch/x86/include/asm/calling.h
>> @@ -85,7 +85,7 @@ For 32-bit we have the following conventions - kernel is built with
>> #define ARGOFFSET R11
>> #define SWFRAME ORIG_RAX
>>
>> - .macro SAVE_ARGS addskip=0, save_rcx=1, save_r891011=1
>> + .macro SAVE_ARGS addskip=0, save_rcx=1, save_r891011=1, rax_enosys=0
>> subq $9*8+\addskip, %rsp
>> CFI_ADJUST_CFA_OFFSET 9*8+\addskip
>> movq_cfi rdi, 8*8
>> @@ -96,7 +96,11 @@ For 32-bit we have the following conventions - kernel is built with
>> movq_cfi rcx, 5*8
>> .endif
>>
>> + .if \rax_enosys
>> + movq $-ENOSYS, 4*8(%rsp)
>> + .else
>> movq_cfi rax, 4*8
>> + .endif
>>
>> .if \save_r891011
>> movq_cfi r8, 3*8
>> diff --git a/arch/x86/kernel/entry_64.S b/arch/x86/kernel/entry_64.S
>> index b25ca96..432c190 100644
>> --- a/arch/x86/kernel/entry_64.S
>> +++ b/arch/x86/kernel/entry_64.S
>> @@ -405,8 +405,8 @@ GLOBAL(system_call_after_swapgs)
>> * and short:
>> */
>> ENABLE_INTERRUPTS(CLBR_NONE)
>> - SAVE_ARGS 8,0
>> - movq %rax,ORIG_RAX-ARGOFFSET(%rsp)
>> + SAVE_ARGS 8, 0, rax_enosys=1
>> + movq_cfi rax,(ORIG_RAX-ARGOFFSET)
>
> I think changing the rax store to a macro is unnecessary,
> since it breaks the common asm style with the next line:
I think it's necessary to maintain CFI correctness. rax is no longer
saved in the "ax" slot, so "orig_ax" is now the correct slot for the
unwind annotation.
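
[For reference, since the macro is what carries the annotation: if I'm
reading arch/x86/include/asm/dwarf2.h correctly, "movq_cfi rax, off"
expands to the plain movq plus a CFI_REL_OFFSET for that offset, and
it's the latter that tells the unwinder where rax now lives.]
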
>> movq %rcx,RIP-ARGOFFSET(%rsp)
This, in contrast, is the saved rip, not the saved rcx: the syscall
instruction clobbers rcx with the return address, so the user's rcx
value is already lost by the time we get here.
> Also, it makes the diff harder to follow.
>
> The change from the next patch 7/7:
>
>> - ja int_ret_from_sys_call /* RAX(%rsp) set to -ENOSYS above */
>> + ja int_ret_from_sys_call /* RAX(%rsp) is already set */
>
> probably belongs in this 6/7 patch as well.
Agreed. Will do for v3.
--Andy
>
> The rest looks good to me. I think it's a big improvement in
> readability compared to v1.
--
Andy Lutomirski
AMA Capital Management, LLC