Message-ID: <CAJcbSZGedSfZZ5rveH2+_3q7pvmMyDGLxmZU41Nno=ZBX8kN=w@mail.gmail.com>
Date: Mon, 5 Aug 2019 10:50:30 -0700
From: Thomas Garnier <thgarnie@...omium.org>
To: Borislav Petkov <bp@...en8.de>
Cc: Kernel Hardening <kernel-hardening@...ts.openwall.com>,
Kristen Carlson Accardi <kristen@...ux.intel.com>,
Kees Cook <keescook@...omium.org>,
Andy Lutomirski <luto@...nel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH v9 04/11] x86/entry/64: Adapt assembly for PIE support
On Mon, Aug 5, 2019 at 10:28 AM Borislav Petkov <bp@...en8.de> wrote:
>
> On Tue, Jul 30, 2019 at 12:12:48PM -0700, Thomas Garnier wrote:
> > Change the assembly code to use only relative references of symbols for the
> > kernel to be PIE compatible.
> >
> > Position Independent Executable (PIE) support will allow to extend the
> > KASLR randomization range below 0xffffffff80000000.
> >
> > Signed-off-by: Thomas Garnier <thgarnie@...omium.org>
> > Reviewed-by: Kees Cook <keescook@...omium.org>
> > ---
> > arch/x86/entry/entry_64.S | 16 +++++++++++-----
> > 1 file changed, 11 insertions(+), 5 deletions(-)
> >
> > diff --git a/arch/x86/entry/entry_64.S b/arch/x86/entry/entry_64.S
> > index 3f5a978a02a7..4b588a902009 100644
> > --- a/arch/x86/entry/entry_64.S
> > +++ b/arch/x86/entry/entry_64.S
> > @@ -1317,7 +1317,8 @@ ENTRY(error_entry)
> > movl %ecx, %eax /* zero extend */
> > cmpq %rax, RIP+8(%rsp)
> > je .Lbstep_iret
> > - cmpq $.Lgs_change, RIP+8(%rsp)
> > + leaq .Lgs_change(%rip), %rcx
> > + cmpq %rcx, RIP+8(%rsp)
> > jne .Lerror_entry_done
> >
> > /*
> > @@ -1514,10 +1515,10 @@ ENTRY(nmi)
> > * resume the outer NMI.
> > */
> >
> > - movq $repeat_nmi, %rdx
> > + leaq repeat_nmi(%rip), %rdx
> > cmpq 8(%rsp), %rdx
> > ja 1f
> > - movq $end_repeat_nmi, %rdx
> > + leaq end_repeat_nmi(%rip), %rdx
> > cmpq 8(%rsp), %rdx
> > ja nested_nmi_out
> > 1:
> > @@ -1571,7 +1572,8 @@ nested_nmi:
> > pushq %rdx
> > pushfq
> > pushq $__KERNEL_CS
> > - pushq $repeat_nmi
> > + leaq repeat_nmi(%rip), %rdx
> > + pushq %rdx
> >
> > /* Put stack back */
> > addq $(6*8), %rsp
> > @@ -1610,7 +1612,11 @@ first_nmi:
> > addq $8, (%rsp) /* Fix up RSP */
> > pushfq /* RFLAGS */
> > pushq $__KERNEL_CS /* CS */
> > - pushq $1f /* RIP */
> > + pushq $0 /* Future return address */
> > + pushq %rax /* Save RAX */
> > + leaq 1f(%rip), %rax /* RIP */
> > + movq %rax, 8(%rsp) /* Put 1f on return address */
> > + popq %rax /* Restore RAX */
>
> Can't you just use a callee-clobbered reg here instead of preserving
> %rax?
I saw that %rdx is used as a temporary and restored before the end, so I
assumed it was not an option.
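For illustration, Boris's suggestion would collapse the four-instruction
fix-up into something like the sketch below. This is a hypothetical
sketch, not the actual patch: it only works if some caller-clobbered
scratch register (%rdx is used here purely as an example) were actually
free at this point, which is exactly what is in doubt, since %rdx is
already holding a live temporary in this path.

```asm
	/* Hypothetical alternative -- assumes a scratch register
	 * (here %rdx) is free, which is not the case in this path. */
	pushfq				/* RFLAGS */
	pushq	$__KERNEL_CS		/* CS */
	leaq	1f(%rip), %rdx		/* RIP-relative address of label 1 */
	pushq	%rdx			/* RIP */
```

The save/restore of %rax in the patch exists precisely because no
register appears to be free here: `pushq $1f` cannot be kept as-is under
PIE, since the push-immediate form takes only a 32-bit sign-extended
operand and would require an absolute relocation that PIE disallows.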
>
> --
> Regards/Gruss,
> Boris.
>
> Good mailing practices for 400: avoid top-posting and trim the reply.