Message-ID: <CAFULd4ZVvRvssyj--un6vrLU5M816ysEkc4xpXnGSN=hyhTTFQ@mail.gmail.com>
Date: Thu, 12 Oct 2023 19:54:26 +0200
From: Uros Bizjak <ubizjak@...il.com>
To: Brian Gerst <brgerst@...il.com>
Cc: x86@...nel.org, xen-devel@...ts.xenproject.org,
linux-kernel@...r.kernel.org, Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>,
"H. Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>
Subject: Re: [PATCH 1/4] x86/percpu: Use explicit segment registers in lib/cmpxchg{8,16}b_emu.S
On Thu, Oct 12, 2023 at 7:45 PM Brian Gerst <brgerst@...il.com> wrote:
>
> On Thu, Oct 12, 2023 at 12:13 PM Uros Bizjak <ubizjak@...il.com> wrote:
> >
> > The PER_CPU_VAR macro is intended to be applied to a symbol; it is
> > not intended to be used as a selector between the %fs and %gs
> > segment registers for general operands.
> >
> > The address is passed to these emulation functions in a register, so
> > use explicit segment registers to access the percpu variable instead.
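> >
> > (Illustration only, not part of the patch: PER_CPU_VAR is normally
> > wrapped around a per-cpu symbol, roughly
> >
> >	movq	PER_CPU_VAR(some_pcpu_symbol), %rax	/* placeholder symbol name */
> >
> > not around a register-indirect operand such as 0(%rsi).)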
> >
> > Also add a missing function comment to this_cpu_cmpxchg8b_emu.
> >
> > No functional changes intended.
> >
> > Cc: Thomas Gleixner <tglx@...utronix.de>
> > Cc: Ingo Molnar <mingo@...hat.com>
> > Cc: Borislav Petkov <bp@...en8.de>
> > Cc: Dave Hansen <dave.hansen@...ux.intel.com>
> > Cc: "H. Peter Anvin" <hpa@...or.com>
> > Cc: Peter Zijlstra <peterz@...radead.org>
> > Signed-off-by: Uros Bizjak <ubizjak@...il.com>
> > ---
> > arch/x86/lib/cmpxchg16b_emu.S | 12 ++++++------
> > arch/x86/lib/cmpxchg8b_emu.S | 30 +++++++++++++++++++++---------
> > 2 files changed, 27 insertions(+), 15 deletions(-)
> >
> > diff --git a/arch/x86/lib/cmpxchg16b_emu.S b/arch/x86/lib/cmpxchg16b_emu.S
> > index 6962df315793..2bd8b89bce75 100644
> > --- a/arch/x86/lib/cmpxchg16b_emu.S
> > +++ b/arch/x86/lib/cmpxchg16b_emu.S
> > @@ -23,14 +23,14 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
> > cli
> >
> > /* if (*ptr == old) */
> > - cmpq PER_CPU_VAR(0(%rsi)), %rax
> > + cmpq %gs:(%rsi), %rax
> > jne .Lnot_same
> > - cmpq PER_CPU_VAR(8(%rsi)), %rdx
> > + cmpq %gs:8(%rsi), %rdx
> > jne .Lnot_same
> >
> > /* *ptr = new */
> > - movq %rbx, PER_CPU_VAR(0(%rsi))
> > - movq %rcx, PER_CPU_VAR(8(%rsi))
> > + movq %rbx, %gs:(%rsi)
> > + movq %rcx, %gs:8(%rsi)
> >
> > /* set ZF in EFLAGS to indicate success */
> > orl $X86_EFLAGS_ZF, (%rsp)
> > @@ -42,8 +42,8 @@ SYM_FUNC_START(this_cpu_cmpxchg16b_emu)
> > /* *ptr != old */
> >
> > /* old = *ptr */
> > - movq PER_CPU_VAR(0(%rsi)), %rax
> > - movq PER_CPU_VAR(8(%rsi)), %rdx
> > + movq %gs:(%rsi), %rax
> > + movq %gs:8(%rsi), %rdx
> >
> > /* clear ZF in EFLAGS to indicate failure */
> > andl $(~X86_EFLAGS_ZF), (%rsp)
> > diff --git a/arch/x86/lib/cmpxchg8b_emu.S b/arch/x86/lib/cmpxchg8b_emu.S
> > index 49805257b125..b7d68d5e2d31 100644
> > --- a/arch/x86/lib/cmpxchg8b_emu.S
> > +++ b/arch/x86/lib/cmpxchg8b_emu.S
> > @@ -24,12 +24,12 @@ SYM_FUNC_START(cmpxchg8b_emu)
> > pushfl
> > cli
> >
> > - cmpl 0(%esi), %eax
> > + cmpl (%esi), %eax
> > jne .Lnot_same
> > cmpl 4(%esi), %edx
> > jne .Lnot_same
> >
> > - movl %ebx, 0(%esi)
> > + movl %ebx, (%esi)
> > movl %ecx, 4(%esi)
> >
> > orl $X86_EFLAGS_ZF, (%esp)
> > @@ -38,7 +38,7 @@ SYM_FUNC_START(cmpxchg8b_emu)
> > RET
> >
> > .Lnot_same:
> > - movl 0(%esi), %eax
> > + movl (%esi), %eax
> > movl 4(%esi), %edx
> >
> > andl $(~X86_EFLAGS_ZF), (%esp)
> > @@ -53,18 +53,30 @@ EXPORT_SYMBOL(cmpxchg8b_emu)
> >
> > #ifndef CONFIG_UML
> >
> > +/*
> > + * Emulate 'cmpxchg8b %fs:(%esi)'
> > + *
> > + * Inputs:
> > + * %esi : memory location to compare
> > + * %eax : low 32 bits of old value
> > + * %edx : high 32 bits of old value
> > + * %ebx : low 32 bits of new value
> > + * %ecx : high 32 bits of new value
> > + *
> > + * Notably this is not LOCK prefixed and is not safe against NMIs
> > + */
> > SYM_FUNC_START(this_cpu_cmpxchg8b_emu)
> >
> > pushfl
> > cli
> >
> > - cmpl PER_CPU_VAR(0(%esi)), %eax
> > + cmpl %fs:(%esi), %eax
> > jne .Lnot_same2
> > - cmpl PER_CPU_VAR(4(%esi)), %edx
> > + cmpl %fs:4(%esi), %edx
> > jne .Lnot_same2
> >
> > - movl %ebx, PER_CPU_VAR(0(%esi))
> > - movl %ecx, PER_CPU_VAR(4(%esi))
> > + movl %ebx, %fs:(%esi)
> > + movl %ecx, %fs:4(%esi)
> >
> > orl $X86_EFLAGS_ZF, (%esp)
> >
> > @@ -72,8 +84,8 @@ SYM_FUNC_START(this_cpu_cmpxchg8b_emu)
> > RET
> >
> > .Lnot_same2:
> > - movl PER_CPU_VAR(0(%esi)), %eax
> > - movl PER_CPU_VAR(4(%esi)), %edx
> > + movl %fs:(%esi), %eax
> > + movl %fs:4(%esi), %edx
> >
> > andl $(~X86_EFLAGS_ZF), (%esp)
> >
> > --
> > 2.41.0
> >
>
> This will break on !SMP builds, where per-cpu variables are just
> regular data and not accessed with a segment prefix.
Ugh, indeed. Let me rethink this a bit.
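
For reference, the assembly-side definition in
arch/x86/include/asm/percpu.h is roughly (quoting from memory, so the
exact form may differ):

	#ifdef CONFIG_SMP
	#define PER_CPU_VAR(var)	%__percpu_seg:var
	#else
	#define PER_CPU_VAR(var)	var
	#endif

so the explicit %gs:/%fs: forms only match the CONFIG_SMP definition.
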
Thanks,
Uros.