Message-ID: <CAHk-=whEc2HR3En32uyAufPM3tEh8J4+dot6JyGW=Eg5SEhx7A@mail.gmail.com>
Date:   Wed, 18 Oct 2023 15:40:05 -0700
From:   Linus Torvalds <torvalds@...ux-foundation.org>
To:     Uros Bizjak <ubizjak@...il.com>, peterz@...radead.org
Cc:     Nadav Amit <namit@...are.com>,
        "the arch/x86 maintainers" <x86@...nel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Andy Lutomirski <luto@...nel.org>,
        Brian Gerst <brgerst@...il.com>,
        Denys Vlasenko <dvlasenk@...hat.com>,
        "H . Peter Anvin" <hpa@...or.com>,
        Thomas Gleixner <tglx@...utronix.de>,
        Josh Poimboeuf <jpoimboe@...hat.com>,
        Nick Desaulniers <ndesaulniers@...gle.com>
Subject: Re: [PATCH v2 -tip] x86/percpu: Use C for arch_raw_cpu_ptr()

On Wed, 18 Oct 2023 at 14:40, Uros Bizjak <ubizjak@...il.com> wrote:
>
> The ones in "raw" form are not IRQ safe and these are implemented
> without volatile qualifier.

You are misreading it.

Both *are* irq safe - on x86.

The difference between "this_cpu_xyz()" and "raw_cpu_xyz()" is that on
*other* architectures, "raw_cpu_xyz()" can be a lot more efficient,
because other architectures may need to do extra work to make the
"this" version be atomic on a particular CPU.

See for example __count_vm_event() vs count_vm_event().

In fact, that particular use isn't even in an interrupt-safe context,
that's an example of literally "I'd rather be fast than correct for
certain statistics that aren't all that important".

The two versions generate the same code on x86, but on other
architectures, __count_vm_event() can be *much* simpler and faster
because it doesn't disable interrupts or do other special things.
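To make that concrete, here's a simplified sketch of what the generic
fallbacks amount to on such an architecture (the "_sketch" names are
mine, not the real asm-generic macros):

  #define raw_cpu_add_sketch(pcp, val)                          \
  do {                                                          \
          *raw_cpu_ptr(&(pcp)) += (val);                        \
  } while (0)

  #define this_cpu_add_sketch(pcp, val)                         \
  do {                                                          \
          unsigned long __flags;                                \
                                                                \
          /* irqs must be off so an interrupt can't interleave  \
             with the read-modify-write of the same variable */ \
          local_irq_save(__flags);                              \
          *raw_cpu_ptr(&(pcp)) += (val);                        \
          local_irq_restore(__flags);                           \
  } while (0)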

But on x86, the whole "interrupt safety" is a complete red herring.
Both of them generate the exact same instruction.

On x86, the "volatile" is actually for a completely different reason:
to avoid too much CSE by the compiler.

See  commit b59167ac7baf ("x86/percpu: Fix this_cpu_read()").
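Schematically, the failure mode that commit was fixing looks like this
(invented names, not the actual tsc code):

  /* Sketch only: "pcpu_seq" is a made-up per-cpu variable. */
  static DEFINE_PER_CPU(unsigned int, pcpu_seq);

  static bool seq_moved(void)
  {
          unsigned int before = this_cpu_read(pcpu_seq);
          unsigned int after  = this_cpu_read(pcpu_seq);

          /*
           * With a plain non-volatile asm the compiler sees two identical
           * reads with identical inputs, CSEs them into one load, and this
           * constant-folds to "false" - even though an interrupt could
           * have bumped pcpu_seq between the two reads.
           */
          return before != after;
  }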

In fact, that commit went overboard, and just added "volatile" to
*every* percpu read.

So then people complained about *that*, and PeterZ did commit
0b9ccc0a9b14 ("x86/percpu: Differentiate this_cpu_{}() and
__this_cpu_{}()"), which basically made that "qual or not" be a macro
choice.

And in the process, it now got added to all the RMW ops, that didn't
actually need it or want it in the first place, since they won't be
CSE'd, since they depend on the input.

So that commit basically generalized the whole thing entirely
pointlessly, and caused your current confusion.

End result: we should remove 'volatile' from the RMW ops. It doesn't
do anything on x86. All it does is leave us with two subtly different
versions whose difference we don't care about.

End result two: we should make it clear that "this_cpu_read()" vs
"raw_cpu_read()" are *NOT* about interrupts. Even on architectures
where the RMW ops need to have irq protection (so that they are atomic
wrt interrupts also modifying the value), the *READ* operation
obviously has no such issue.

For the raw_cpu_read() vs this_cpu_read() case, the only issue is
whether you can CSE the result.

And in 99% of all cases, you can - and want to - CSE it. But as that
commit b59167ac7baf shows, sometimes you cannot.
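For contrast, the common case where you actively want the CSE looks
something like this (again with invented names):

  /* Sketch: nothing between the two reads cares about the value changing. */
  static DEFINE_PER_CPU(unsigned long, pcpu_counter);

  static unsigned long read_twice(void)
  {
          /*
           * Folding these into a single gs-relative load is exactly what
           * you want here, and it's what the non-volatile raw_cpu_read()
           * form allows; the volatile this_cpu_read() would force the
           * compiler to emit both loads.
           */
          return raw_cpu_read(pcpu_counter) + raw_cpu_read(pcpu_counter);
  }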

Side note: the code that caused that problem is this:

  __always_inline void __cyc2ns_read(struct cyc2ns_data *data)
  {
        int seq, idx;

        do {
                seq = this_cpu_read(cyc2ns.seq.seqcount.sequence);
                ...
        } while (unlikely(seq != this_cpu_read(cyc2ns.seq.seqcount.sequence)));
  }

where the issue is that the this_cpu_read() of that sequence number
needs to be ordered.

Honestly, that code is just buggy and bad.  We should never have
"fixed" it by changing the semantics of this_cpu_read() in the first
place.

The problem is that it re-implements its own locking model, and as so
often happens when people do that, they do it completely wrongly.

Look at the *REAL* sequence counter code in <linux/seqlock.h>. Notice
how in raw_read_seqcount_begin() we have

        unsigned _seq = __read_seqcount_begin(s);
        smp_rmb();

because it actually does the proper barriers. Notice how the garbage
code in __cyc2ns_read() doesn't have them - and how it was buggy as a
result.

(Also notice how this all predates our "we should use load_acquire()
instead of smp_rmb()", but whatever).
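For comparison, a minimal sketch of that read loop written against the
real seqcount API - illustrative names only, not the actual tsc.c fix:

  static seqcount_t c2n_seq;
  static DEFINE_PER_CPU(u32, c2n_mult);

  static u32 read_mult(void)
  {
          unsigned int seq;
          u32 mult;

          do {
                  /*
                   * Read of the count plus smp_rmb(), so the data read
                   * below can't be hoisted above it.
                   */
                  seq  = raw_read_seqcount_begin(&c2n_seq);
                  mult = this_cpu_read(c2n_mult);
                  /* read_seqcount_retry() does smp_rmb() before re-checking */
          } while (read_seqcount_retry(&c2n_seq, seq));

          return mult;
  }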

IOW, all the "volatiles" in the x86 <asm/percpu.h> file are LITERAL
GARBAGE and should not exist, and are due to a historical mistake.

                   Linus
