Message-ID: <CAFULd4b29XMoBoN9_0BCtuV2dgasO=WUu0re91Refjx68Q9O9A@mail.gmail.com>
Date: Fri, 7 Mar 2025 10:20:17 +0100
From: Uros Bizjak <ubizjak@...il.com>
To: "H. Peter Anvin" <hpa@...or.com>
Cc: linux-kernel@...r.kernel.org,
tip-bot2 for Uros Bizjak <tip-bot2@...utronix.de>, linux-tip-commits@...r.kernel.org,
Ingo Molnar <mingo@...nel.org>, David Woodhouse <dwmw@...zon.co.uk>, Baoquan He <bhe@...hat.com>,
Vivek Goyal <vgoyal@...hat.com>, Dave Young <dyoung@...hat.com>, Ard Biesheuvel <ardb@...nel.org>,
x86@...nel.org
Subject: Re: [tip: x86/asm] x86/kexec: Merge x86_32 and x86_64 code using
macros from <asm/asm.h>
On Fri, Mar 7, 2025 at 4:00 AM H. Peter Anvin <hpa@...or.com> wrote:
>
> On March 6, 2025 1:33:43 PM PST, tip-bot2 for Uros Bizjak <tip-bot2@...utronix.de> wrote:
> >The following commit has been merged into the x86/asm branch of tip:
> >
> >Commit-ID: aa3942d4d12ef57f031faa2772fe410c24191e36
> >Gitweb: https://git.kernel.org/tip/aa3942d4d12ef57f031faa2772fe410c24191e36
> >Author: Uros Bizjak <ubizjak@...il.com>
> >AuthorDate: Thu, 06 Mar 2025 15:52:11 +01:00
> >Committer: Ingo Molnar <mingo@...nel.org>
> >CommitterDate: Thu, 06 Mar 2025 22:04:48 +01:00
> >
> >x86/kexec: Merge x86_32 and x86_64 code using macros from <asm/asm.h>
> >
> >Merge common x86_32 and x86_64 code in crash_setup_regs()
> >using macros from <asm/asm.h>.
> >
> >The compiled object files before and after the patch are unchanged.
> >
> >Signed-off-by: Uros Bizjak <ubizjak@...il.com>
> >Signed-off-by: Ingo Molnar <mingo@...nel.org>
> >Cc: David Woodhouse <dwmw@...zon.co.uk>
> >Cc: Baoquan He <bhe@...hat.com>
> >Cc: Vivek Goyal <vgoyal@...hat.com>
> >Cc: Dave Young <dyoung@...hat.com>
> >Cc: Ard Biesheuvel <ardb@...nel.org>
> >Cc: "H. Peter Anvin" <hpa@...or.com>
> >Link: https://lore.kernel.org/r/20250306145227.55819-1-ubizjak@gmail.com
> >---
> > arch/x86/include/asm/kexec.h | 58 +++++++++++++++--------------------
> > 1 file changed, 25 insertions(+), 33 deletions(-)
> >
> >diff --git a/arch/x86/include/asm/kexec.h b/arch/x86/include/asm/kexec.h
> >index 8ad1874..e3589d6 100644
> >--- a/arch/x86/include/asm/kexec.h
> >+++ b/arch/x86/include/asm/kexec.h
> >@@ -18,6 +18,7 @@
> > #include <linux/string.h>
> > #include <linux/kernel.h>
> >
> >+#include <asm/asm.h>
> > #include <asm/page.h>
> > #include <asm/ptrace.h>
> >
> >@@ -71,41 +72,32 @@ static inline void crash_setup_regs(struct pt_regs *newregs,
> > if (oldregs) {
> > memcpy(newregs, oldregs, sizeof(*newregs));
> > } else {
> >+ asm volatile("mov %%" _ASM_BX ",%0" : "=m"(newregs->bx));
> >+ asm volatile("mov %%" _ASM_CX ",%0" : "=m"(newregs->cx));
> >+ asm volatile("mov %%" _ASM_DX ",%0" : "=m"(newregs->dx));
> >+ asm volatile("mov %%" _ASM_SI ",%0" : "=m"(newregs->si));
> >+ asm volatile("mov %%" _ASM_DI ",%0" : "=m"(newregs->di));
> >+ asm volatile("mov %%" _ASM_BP ",%0" : "=m"(newregs->bp));
> >+ asm volatile("mov %%" _ASM_AX ",%0" : "=m"(newregs->ax));
> >+ asm volatile("mov %%" _ASM_SP ",%0" : "=m"(newregs->sp));
> >+#ifdef CONFIG_X86_64
> >+ asm volatile("mov %%r8,%0" : "=m"(newregs->r8));
> >+ asm volatile("mov %%r9,%0" : "=m"(newregs->r9));
> >+ asm volatile("mov %%r10,%0" : "=m"(newregs->r10));
> >+ asm volatile("mov %%r11,%0" : "=m"(newregs->r11));
> >+ asm volatile("mov %%r12,%0" : "=m"(newregs->r12));
> >+ asm volatile("mov %%r13,%0" : "=m"(newregs->r13));
> >+ asm volatile("mov %%r14,%0" : "=m"(newregs->r14));
> >+ asm volatile("mov %%r15,%0" : "=m"(newregs->r15));
> >+#endif
> >+ asm volatile("mov %%ss,%k0" : "=a"(newregs->ss));
> >+ asm volatile("mov %%cs,%k0" : "=a"(newregs->cs));
> > #ifdef CONFIG_X86_32
> >- asm volatile("movl %%ebx,%0" : "=m"(newregs->bx));
> >- asm volatile("movl %%ecx,%0" : "=m"(newregs->cx));
> >- asm volatile("movl %%edx,%0" : "=m"(newregs->dx));
> >- asm volatile("movl %%esi,%0" : "=m"(newregs->si));
> >- asm volatile("movl %%edi,%0" : "=m"(newregs->di));
> >- asm volatile("movl %%ebp,%0" : "=m"(newregs->bp));
> >- asm volatile("movl %%eax,%0" : "=m"(newregs->ax));
> >- asm volatile("movl %%esp,%0" : "=m"(newregs->sp));
> >- asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
> >- asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
> >- asm volatile("movl %%ds, %%eax;" :"=a"(newregs->ds));
> >- asm volatile("movl %%es, %%eax;" :"=a"(newregs->es));
> >- asm volatile("pushfl; popl %0" :"=m"(newregs->flags));
> >-#else
> >- asm volatile("movq %%rbx,%0" : "=m"(newregs->bx));
> >- asm volatile("movq %%rcx,%0" : "=m"(newregs->cx));
> >- asm volatile("movq %%rdx,%0" : "=m"(newregs->dx));
> >- asm volatile("movq %%rsi,%0" : "=m"(newregs->si));
> >- asm volatile("movq %%rdi,%0" : "=m"(newregs->di));
> >- asm volatile("movq %%rbp,%0" : "=m"(newregs->bp));
> >- asm volatile("movq %%rax,%0" : "=m"(newregs->ax));
> >- asm volatile("movq %%rsp,%0" : "=m"(newregs->sp));
> >- asm volatile("movq %%r8,%0" : "=m"(newregs->r8));
> >- asm volatile("movq %%r9,%0" : "=m"(newregs->r9));
> >- asm volatile("movq %%r10,%0" : "=m"(newregs->r10));
> >- asm volatile("movq %%r11,%0" : "=m"(newregs->r11));
> >- asm volatile("movq %%r12,%0" : "=m"(newregs->r12));
> >- asm volatile("movq %%r13,%0" : "=m"(newregs->r13));
> >- asm volatile("movq %%r14,%0" : "=m"(newregs->r14));
> >- asm volatile("movq %%r15,%0" : "=m"(newregs->r15));
> >- asm volatile("movl %%ss, %%eax;" :"=a"(newregs->ss));
> >- asm volatile("movl %%cs, %%eax;" :"=a"(newregs->cs));
> >- asm volatile("pushfq; popq %0" :"=m"(newregs->flags));
> >+ asm volatile("mov %%ds,%k0" : "=a"(newregs->ds));
> >+ asm volatile("mov %%es,%k0" : "=a"(newregs->es));
> > #endif
> >+ asm volatile("pushf\n\t"
> >+ "pop %0" : "=m"(newregs->flags));
> > newregs->ip = _THIS_IP_;
> > }
> > }
>
> Incidentally, doing this in C code is obviously completely broken, especially doing it in multiple statements. You have no idea what the compiler has messed with before you get there.
These are "asm volatile" statements, so at least they won't be
reordered by the compiler's scheduler. OTOH, please note that the
patch is very carefully written not to change the code flow; using
hardregs in inline asm is usually a sign of fragile code.
Uros.