Message-ID: <20211206201122.GE641268@paulmck-ThinkPad-P17-Gen-1>
Date:   Mon, 6 Dec 2021 12:11:22 -0800
From:   "Paul E. McKenney" <paulmck@...nel.org>
To:     Florian Weimer <fweimer@...hat.com>
Cc:     Mathieu Desnoyers <mathieu.desnoyers@...icios.com>,
        Boqun Feng <boqun.feng@...il.com>,
        Peter Zijlstra <peterz@...radead.org>,
        libc-alpha <libc-alpha@...rceware.org>,
        linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 2/5] nptl: Add rseq registration

On Mon, Dec 06, 2021 at 08:03:26PM +0100, Florian Weimer wrote:
> * Mathieu Desnoyers:
> 
> > [ Adding other kernel rseq maintainers in CC. ]
> >
> > ----- On Dec 6, 2021, at 12:14 PM, Florian Weimer fweimer@...hat.com wrote:
> >
> >> * Mathieu Desnoyers:
> >> 
> >>> ----- On Dec 6, 2021, at 8:46 AM, Florian Weimer fweimer@...hat.com wrote:
> >>> [...]
> >>>> @@ -406,6 +407,9 @@ struct pthread
> >>>>   /* Used on strsignal.  */
> >>>>   struct tls_internal_t tls_state;
> >>>> 
> >>>> +  /* rseq area registered with the kernel.  */
> >>>> +  struct rseq rseq_area;
> >>>
> >>> The rseq UAPI requires that the fields within the rseq_area
> >>> are read and written with single-copy atomicity semantics.
> >>>
> >>> So we either define a "volatile struct rseq" here, wrap all
> >>> accesses in the proper volatile casts, or use relaxed-MO atomic
> >>> accesses.
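A minimal sketch of the volatile-cast option, assuming a hypothetical
helper name rseq_get_cpu_id (this is not code from the patch):

  #include <stdint.h>
  #include <linux/rseq.h>   /* struct rseq (kernel UAPI) */

  /* Force the compiler to emit exactly one load of the naturally
     aligned 32-bit cpu_id field; such a load is performed with
     single-copy atomicity on the supported targets.  */
  static inline uint32_t
  rseq_get_cpu_id (const struct rseq *area)
  {
    return *(volatile const uint32_t *) &area->cpu_id;
  }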
> >> 
> >> Under the C memory model, neither volatile nor relaxed MO results in
> >> single-copy atomicity semantics.  So I'm not sure what to make of this.
> >> Surely switching to inline assembly on all targets would be over the top.
> >> 
> >> I think we can rely on a plain read doing the right thing for us.
> >
> > AFAIU, a plain read does not prevent the compiler from re-loading the
> > value under high register pressure.
> >
> > Accesses to rseq fields such as cpu_id need to be done as if those were
> > concurrently modified by a signal handler nesting on top of the user-space
> > code, with the particular twist that blocking signals has no effect on
> > concurrent updates.
> >
> > I do not think we need to do the load in assembly.  I was under the
> > impression that both a volatile load and a relaxed-MO load provide
> > single-copy atomicity semantics for an aligned access.  Perhaps Paul,
> > Peter, or Boqun have something to add here?
> 
> The C memory model is broken and does not prevent out-of-thin-air
> values.  As far as I know, this breaks single-copy atomicity.  In
> practice, compilers will not exercise the latitude offered by the memory
> model.  volatile does not ensure the absence of data races.

Within the confines of the standard, agreed, use of the volatile keyword
does not explicitly prevent data races.

However, volatile accesses are (informally) defined to suffice for
device-driver memory accesses that communicate with devices, whether via
MMIO or DMA-style shared memory, and device firmware is often written in
C or C++.  So shouldn't this informal device-driver guarantee also
suffice for userspace code communicating with kernel code?  If not, why
not?

> Using atomics or volatile would require us to materialize the thread
> pointer, given the current internal interfaces we have, and I don't want
> to do this because this is supposed to be performance-critical code.
> The compiler barrier inherent to the function call will have to be
> enough.  I can add a comment to this effect:
> 
>   /* This load has single-copy atomicity semantics (as required for
>      rseq) because the function call implies a compiler barrier.  */
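As a sketch of where such a comment would sit, assuming an illustrative
sched_getcpu-style reader (THREAD_SELF is glibc's internal macro for the
current struct pthread; the syscall fallback is elided):

  int
  sched_getcpu (void)
  {
    /* This load has single-copy atomicity semantics (as required for
       rseq) because the function call implies a compiler barrier.  */
    int32_t cpu = (int32_t) THREAD_SELF->rseq_area.cpu_id;

    /* Negative values (RSEQ_CPU_ID_UNINITIALIZED or
       RSEQ_CPU_ID_REGISTRATION_FAILED) mean rseq is not usable; a
       real implementation would fall back to the getcpu syscall.  */
    return cpu >= 0 ? cpu : -1;
  }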

Agreed on the need to be very careful to avoid degrading performance on
fast paths!

							Thanx, Paul
