Message-ID: <90084B6D-AF09-467D-A550-CC557F709847@vmware.com>
Date: Wed, 18 Oct 2023 20:17:14 +0000
From: Nadav Amit <namit@...are.com>
To: Uros Bizjak <ubizjak@...il.com>
CC: Linus Torvalds <torvalds@...ux-foundation.org>,
the arch/x86 maintainers <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...nel.org>,
Brian Gerst <brgerst@...il.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Josh Poimboeuf <jpoimboe@...hat.com>,
Nick Desaulniers <ndesaulniers@...gle.com>
Subject: Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()
> On Oct 18, 2023, at 10:33 PM, Uros Bizjak <ubizjak@...il.com> wrote:
>
> This patch works for me:
>
> --cut here--
> diff --git a/arch/x86/include/asm/smp.h b/arch/x86/include/asm/smp.h
> index 4fab2ed454f3..6eda4748bf64 100644
> --- a/arch/x86/include/asm/smp.h
> +++ b/arch/x86/include/asm/smp.h
> @@ -141,8 +141,7 @@ __visible void smp_call_function_single_interrupt(struct pt_regs *r);
> * This function is needed by all SMP systems. It must _always_ be valid
> * from the initial startup.
> */
> -#define raw_smp_processor_id() this_cpu_read(pcpu_hot.cpu_number)
> -#define __smp_processor_id() __this_cpu_read(pcpu_hot.cpu_number)
> +#define raw_smp_processor_id() raw_cpu_read(pcpu_hot.cpu_number)
I don’t think that’s correct. IIUC, while changing __smp_processor_id()
to read pcpu_hot.cpu_number through raw_cpu_read() is fine,
raw_smp_processor_id() should not be changed in this manner.
raw_smp_processor_id() cannot assume that preemption is disabled, or
even that the task cannot migrate to another core. That is why the
“volatile” keyword and inline assembly are used: they ensure the
ordering and that the value is read exactly once, without some compiler
optimization that reads the value multiple times or tears the read.
In contrast, raw_cpu_read() does not use the volatile keyword, so it
does not provide the same guarantees.