Message-ID: <CAHk-=wjF4gzCZKh-zN-sY0WpX1kCo+s9gYE9sOcSv0QieH1dwQ@mail.gmail.com>
Date: Wed, 11 Oct 2023 12:37:19 -0700
From: Linus Torvalds <torvalds@...ux-foundation.org>
To: Nadav Amit <namit@...are.com>
Cc: Uros Bizjak <ubizjak@...il.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Andy Lutomirski <luto@...nel.org>,
Brian Gerst <brgerst@...il.com>,
Denys Vlasenko <dvlasenk@...hat.com>,
"H . Peter Anvin" <hpa@...or.com>,
Peter Zijlstra <peterz@...radead.org>,
Thomas Gleixner <tglx@...utronix.de>,
Josh Poimboeuf <jpoimboe@...hat.com>
Subject: Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()

On Wed, 11 Oct 2023 at 00:42, Nadav Amit <namit@...are.com> wrote:
>
> You are correct. Having said that, for "current" we may be able to do something
> better, as regardless of preemption "current" remains the same, and
> this_cpu_read_stable() does miss some opportunities to avoid reloading the
> value from memory.

It would be lovely to generate even better code, but that
this_cpu_read_stable() thing is the best we've come up with. It
intentionally has *no* memory inputs or anything else that might make
gcc think "I need to re-do this".

For example, instead of using "m" as a memory input, it very
intentionally uses "p", to make it clear that it just uses the
_pointer_, not the memory location itself.
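
To make that concrete, here's roughly what the pattern looks like - a
minimal sketch of the 8-byte case only, assuming the usual %gs-based
per-cpu addressing on x86-64, not the kernel's actual
this_cpu_read_stable() machinery in arch/x86/include/asm/percpu.h
(which handles all the operand sizes):

#define this_cpu_read_stable_sketch(var)			\
({								\
	typeof(var) val__;					\
	/* The asm really does load from memory through %gs... */ \
	asm("movq %%gs:%P1, %0"					\
	    : "=r" (val__)					\
	    /* ...but all gcc sees is a pointer input, no "m" */ \
	    : "p" (&(var)));					\
	val__;							\
})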

That's obviously a lie - it actually does access memory - but it's a
lie exactly because of the reason you mention: even when the memory
location changes due to preemption (or explicit scheduling), it always
changes back to the value we care about.

So gcc _should_ be able to CSE it in all situations, but it's entirely
possible that gcc then decides to re-generate the value for whatever
reason. It's a cheap op, so it's ok to regen, of course, but the
intent is basically to let the compiler re-use the value as much as
possible.
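
As a purely hypothetical illustration of that CSE (every name below is
made up for the example - this is not the kernel's get_current() -
and it uses the sketch macro from above):

struct task_struct;
extern void do_something(struct task_struct *);
/* stand-in for the per-cpu slot holding the current task pointer */
extern struct task_struct *pcpu_current_task;

#define get_current_sketch() \
	this_cpu_read_stable_sketch(pcpu_current_task)

void example(void)
{
	do_something(get_current_sketch());
	/*
	 * No memory input, not volatile: gcc may treat this second read
	 * as identical to the first and reuse the register from above
	 * instead of doing another %gs-relative load.  That's fine:
	 * even if do_something() slept and we came back on another CPU,
	 * that CPU's slot holds the same task pointer.
	 */
	do_something(get_current_sketch());
}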

But it *is* probably better to regenerate the value than it would be
to spill and re-load it, and from the cases I've seen, this all tends
to work fairly well.

Linus