Message-ID: <Y1ZnHkb7djZpANX5@hirez.programming.kicks-ass.net>
Date:   Mon, 24 Oct 2022 12:21:18 +0200
From:   Peter Zijlstra <peterz@...radead.org>
To:     Linus Torvalds <torvalds@...ux-foundation.org>
Cc:     x86@...nel.org, willy@...radead.org, akpm@...ux-foundation.org,
        linux-kernel@...r.kernel.org, linux-mm@...ck.org,
        aarcange@...hat.com, kirill.shutemov@...ux.intel.com,
        jroedel@...e.de, ubizjak@...il.com
Subject: Re: [PATCH 09/13] x86/mm/pae: Use WRITE_ONCE()

On Sat, Oct 22, 2022 at 10:42:52AM -0700, Linus Torvalds wrote:
> On Sat, Oct 22, 2022 at 4:48 AM Peter Zijlstra <peterz@...radead.org> wrote:
> >
> >  static inline void native_set_pte(pte_t *ptep, pte_t pte)
> >  {
> > -       ptep->pte_high = pte.pte_high;
> > +       WRITE_ONCE(ptep->pte_high, pte.pte_high);
> >         smp_wmb();
> > -       ptep->pte_low = pte.pte_low;
> > +       WRITE_ONCE(ptep->pte_low, pte.pte_low);
> 
> With this, the smp_wmb() should just go away too. It was really only
> ever there as a compiler barrier.

Right; however, I find it easier to reason about this with the smp_wmb()
there, esp. since the counterpart is in generic code and (necessarily)
carries those smp_rmb()s.

Still, I can take them out if you prefer.
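
Roughly, that counterpart looks like this (a sketch of
ptep_get_lockless() from include/linux/pgtable.h, trimmed of the
ifdeffery and the big comment; the in-tree version may differ in
detail):

static inline pte_t ptep_get_lockless(pte_t *ptep)
{
        pte_t pte;

        do {
                pte.pte_low = ptep->pte_low;            /* first load */
                smp_rmb();
                pte.pte_high = ptep->pte_high;          /* second load */
                smp_rmb();
        } while (unlikely(pte.pte_low != ptep->pte_low)); /* third load */

        return pte;
}

The two smp_rmb()s there are what pair with the smp_wmb() on the store
side.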

> Or do we already have a comment elsewhere about why the ordering is
> important (and how *clearing* clears the low word with the present bit
> first, but setting a *new* entry sets the high word first so that the
> 64-bit entry is complete when the present bit is set?)

There's a comment in include/linux/pgtable.h near ptep_get_lockless().
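
The gist of it: set must make the entry complete before the present bit
appears (high word first, as in the hunk above), while clear must make
the entry non-present first (low word first). The clear side currently
reads (a sketch of native_pte_clear() from
arch/x86/include/asm/pgtable-3level.h, before this series converts it
to WRITE_ONCE()):

static inline void native_pte_clear(struct mm_struct *mm,
                                    unsigned long addr, pte_t *ptep)
{
        ptep->pte_low = 0;      /* kill the present bit first */
        smp_wmb();
        ptep->pte_high = 0;
}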

Now, I've been on the fence about making those READ_ONCE(); I think
KCSAN would want that, but I think the code is correct without them:
even if the loads get torn, we rely on the equality of the first and
third load, and the barriers then guarantee the second load is
coherent.

OTOH, if the stores (this patch) go funny and get torn, bad things can
happen: imagine it writing the byte with the present bit first and then
the other bytes (because the compiler is an evil bastard and wants a
giggle).
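
To illustrate the difference (a hypothetical userspace stand-in, not
the kernel's actual macro):

#include <stdint.h>

/* Stand-in for the kernel's WRITE_ONCE(): the volatile qualifier
 * obliges the compiler to perform the access as written, which in
 * practice means one naturally-aligned 32-bit store, never byte-sized
 * pieces. */
#define WRITE_ONCE(x, val)      (*(volatile uint32_t *)&(x) = (val))

void set_low_plain(uint32_t *pte_low, uint32_t val)
{
        /* A plain store: the compiler may legally tear this into
         * smaller stores, e.g. the present-bit byte first. */
        *pte_low = val;
}

void set_low_once(uint32_t *pte_low, uint32_t val)
{
        /* A single full-width store; no tearing. */
        WRITE_ONCE(*pte_low, val);
}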

