Message-ID: <2236FBA76BA1254E88B949DDB74E612B802BF69F@IRSMSX102.ger.corp.intel.com>
Date:   Fri, 17 Nov 2017 08:17:56 +0000
From:   "Reshetova, Elena" <elena.reshetova@...el.com>
To:     Kees Cook <keescook@...omium.org>
CC:     Peter Zijlstra <peterz@...radead.org>,
        LKML <linux-kernel@...r.kernel.org>,
        Greg KH <gregkh@...uxfoundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Ingo Molnar <mingo@...hat.com>,
        Hans Liljestrand <ishkamiel@...il.com>,
        "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
        Alan Stern <stern@...land.harvard.edu>,
        Jonathan Corbet <corbet@....net>,
        "linux-doc@...r.kernel.org" <linux-doc@...r.kernel.org>
Subject: RE: [PATCH] refcount_t: documentation for memory ordering
 differences

Hi Kees, 

Thank you for the proofreading. I will fix the typos/language, but
see my comments on the bigger items inline.

> On Tue, Nov 14, 2017 at 11:55 PM, Elena Reshetova
> <elena.reshetova@...el.com> wrote:
> > Some functions from the refcount_t API provide different
> > memory ordering guarantees than their atomic counterparts.
> > This adds a document outlining these differences.
> 
> Thanks for writing this up! One bike-shedding thing I'll bring up
> before anyone else does is: please format this in ReST and link to it
> from somewhere (likely the developer documentation) in the
> Documentation/ index.rst file.
> 
> Perhaps in Documentation/core-api/index.rst ?

Sure, I can do it.
Peter, do you have any objections?

> 
> Lots of notes here:
> https://www.kernel.org/doc/html/latest/doc-guide/sphinx.html#writing-documentation
> 
> > Signed-off-by: Elena Reshetova <elena.reshetova@...el.com>
> > ---
> >  Documentation/refcount-vs-atomic.txt | 124 +++++++++++++++++++++++++++++++++++
> >  1 file changed, 124 insertions(+)
> >  create mode 100644 Documentation/refcount-vs-atomic.txt
> >
> > diff --git a/Documentation/refcount-vs-atomic.txt b/Documentation/refcount-vs-atomic.txt
> > new file mode 100644
> > index 0000000..e703039
> > --- /dev/null
> > +++ b/Documentation/refcount-vs-atomic.txt
> > @@ -0,0 +1,124 @@
> > +==================================
> > +refcount_t API compare to atomic_t
> 
> "compared"
> 
> > +==================================
> > +
> > +The goal of refcount_t API is to provide a minimal API for implementing
> > +object's reference counters. While a generic architecture-independent
> 
> "an object's"
> 
> > +implementation from lib/refcount.c uses atomic operations underneath,
> > +there are a number of differences between some of the refcount_*() and
> > +atomic_*() functions with regards to the memory ordering guarantees.
> > +
> > +This document outlines the differences and provides respective examples
> > +in order to help maintainers validate their code against the change in
> > +these memory ordering guarantees.
> > +
> > +memory-barriers.txt and atomic_t.txt provide more background to the
> > +memory ordering in general and for atomic operations specifically.
> > +
> > +Notation
> 
> Should this section be called "Types of memory ordering"?

Well, these are only some types of ordering, and they are explained mostly
in the context of refcount_t vs. atomic_t, so the section doesn't cover
everything...

> 
> > +========
> > +
> > +An absence of memory ordering guarantees (i.e. fully unordered)
> > +in case of atomics & refcounters only provides atomicity and
> 
> I can't parse this. "In an absence ... atomics & refcounts only provide ... "?
> 
> > +program order (po) relation (on the same CPU). It guarantees that
> > +each atomic_*() and refcount_*() operation is atomic and instructions
> > +are executed in program order on a single CPU.
> > +Implemented using READ_ONCE()/WRITE_ONCE() and
> > +compare-and-swap primitives.
> 
> For here and later, maybe "This is implemented ..."
> 
> > +
> > +A strong (full) memory ordering guarantees that all prior loads and
> > +stores (all po-earlier instructions) on the same CPU are completed
> > +before any po-later instruction is executed on the same CPU.
> > +It also guarantees that all po-earlier stores on the same CPU
> > +and all propagated stores from other CPUs must propagate to all
> > +other CPUs before any po-later instruction is executed on the original
> > +CPU (A-cumulative property). Implemented using smp_mb().
> > +
> > +A RELEASE memory ordering guarantees that all prior loads and
> > +stores (all po-earlier instructions) on the same CPU are completed
> > +before the operation. It also guarantees that all po-earlier
> > +stores on the same CPU and all propagated stores from other CPUs
> > +must propagate to all other CPUs before the release operation
> > +(A-cumulative property). Implemented using smp_store_release().
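
(For the RELEASE wording above, I think of it as the usual publish pattern;
'payload' and 'published' below are made-up variables, just to illustrate,
with the reader side using the matching acquire:)

#include <asm/barrier.h>
#include <linux/bug.h>

static int payload;
static int published;

static void writer(void)			/* CPU A */
{
	payload = 42;				/* po-earlier store */
	smp_store_release(&published, 1);	/* payload is guaranteed to be
						 * visible to any CPU that
						 * observes published == 1 */
}

static void reader(void)			/* CPU B */
{
	if (smp_load_acquire(&published))	/* pairs with the release above */
		BUG_ON(payload != 42);		/* cannot fire */
}
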
> > +
> > +A control dependency (on success) for refcounters guarantees that
> > +if a reference for an object was successfully obtained (reference
> > +counter increment or addition happened, function returned true),
> > +then further stores are ordered against this operation.
> > +Control dependencies on stores are not implemented using any explicit
> > +barriers, but rely on the CPU not to speculate on stores. This is only
> > +a single-CPU relation and provides no guarantees for other CPUs.
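
(Similarly, maybe a tiny sketch makes the "control dependency on success"
wording more concrete; struct obj and obj_get_and_flag() are made up:)

#include <linux/refcount.h>
#include <linux/types.h>

struct obj {
	refcount_t ref;
	int flag;
};

/*
 * The store to o->flag cannot be speculated before the CPU knows that
 * the increment succeeded (the branch), so it is ordered after the
 * successful refcount operation on this CPU.  Nothing is implied for
 * loads, and nothing is implied for what other CPUs observe.
 */
static bool obj_get_and_flag(struct obj *o)
{
	if (refcount_inc_not_zero(&o->ref)) {
		o->flag = 1;	/* ordered after the increment (ctrl dep) */
		return true;	/* caller now holds a reference */
	}
	return false;		/* object is already going away */
}
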
> > +
> > +
> > +Comparison of functions
> > +==========================
> > +
> > +case 1) - non-RMW ops
> 
> Should this be spelled out "Read/Modify/Write"?

Sure.

> 
> > +---------------------
> > +
> > +Function changes:
> > +                atomic_set() --> refcount_set()
> > +                atomic_read() --> refcount_read()
> > +
> > +Memory ordering guarantee changes:
> > +                fully unordered --> fully unordered
> 
> Maybe say: "none (both fully unordered)"

Ok

> 
> > +case 2) - increment-based ops that return no value
> > +--------------------------------------------------
> > +
> > +Function changes:
> > +                atomic_inc() --> refcount_inc()
> > +                atomic_add() --> refcount_add()
> > +
> > +Memory ordering guarantee changes:
> > +                fully unordered --> fully unordered
> 
> Same.
> 
> > +case 3) - decrement-based RMW ops that return no value
> > +------------------------------------------------------
> > +Function changes:
> > +                atomic_dec() --> refcount_dec()
> > +
> > +Memory ordering guarantee changes:
> > +                fully unordered --> RELEASE ordering
> 
> Should the sections where there is a change include an example of how
> this might matter to a developer?

I tried giving examples in the previous version of this doc, but I found
them more confusing than helpful, so I left them out of this new version.
What would be really great here is a real example from the tree where such
a difference actually matters for a refcounter, but I wasn't able to find
one.

For an artificial example, I am really struggling to define a meaningful
one; does anyone have any ideas?
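
That said, the closest thing I have is the sketch below for the decrement
cases (struct foo, foo_update_and_put() and foo_put() are all made up, so
please treat it as an illustration only), in case it helps:

#include <linux/refcount.h>
#include <linux/slab.h>

struct foo {
	refcount_t ref;
	int stats;
};

/*
 * Two CPUs each hold a reference.  CPU A updates the object and then
 * drops its reference; CPU B just drops its reference and may end up
 * freeing the object.
 */
static void foo_update_and_put(struct foo *f)	/* runs on CPU A */
{
	f->stats++;				/* po-earlier store */
	if (refcount_dec_and_test(&f->ref))	/* RELEASE orders the store above */
		kfree(f);
}

static void foo_put(struct foo *f)		/* runs on CPU B */
{
	/*
	 * If B drops the last reference, the control dependency on the
	 * successful test orders the stores done by kfree() after the
	 * read of the zero counter, and the RELEASE on A's decrement
	 * guarantees that A's f->stats store has propagated before A's
	 * decrement is visible.  A fully unordered decrement on A's
	 * side would not give us that guarantee.
	 */
	if (refcount_dec_and_test(&f->ref))
		kfree(f);
}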

Best Regards,
Elena.

> 
> > +
> > +
> > +case 4) - increment-based RMW ops that return a value
> > +-----------------------------------------------------
> > +
> > +Function changes:
> > +                atomic_inc_not_zero() --> refcount_inc_not_zero()
> > +                no atomic counterpart --> refcount_add_not_zero()
> > +
> > +Memory ordering guarantees changes:
> > +                fully ordered --> control dependency on success for stores
> > +
> > +*Note*: we really assume here that the necessary ordering is provided
> > +as a result of obtaining a pointer to the object!
> 
> Same.
> 
> > +
> > +
> > +case 5) - decrement-based RMW ops that return a value
> > +-----------------------------------------------------
> > +
> > +Function changes:
> > +                atomic_dec_and_test() --> refcount_dec_and_test()
> > +                atomic_sub_and_test() --> refcount_sub_and_test()
> > +                no atomic counterpart --> refcount_dec_if_one()
> > +                atomic_add_unless(&var, -1, 1) --> refcount_dec_not_one(&var)
> > +
> > +Memory ordering guarantees changes:
> > +                fully ordered --> RELEASE ordering + control dependency
> > +
> > +Note: atomic_add_unless() only provides full order on success.
> 
> Same.
> 
> > +
> > +
> > +case 6) - lock-based RMW
> > +------------------------
> > +
> > +Function changes:
> > +
> > +                atomic_dec_and_lock() --> refcount_dec_and_lock()
> > +                atomic_dec_and_mutex_lock() --> refcount_dec_and_mutex_lock()
> > +
> > +Memory ordering guarantees changes:
> > +                fully ordered --> RELEASE ordering + control dependency +
> > +                                  hold spin_lock() on success
> 
> Same.
> 
> This looks like a good start to helping people answer questions about
> refcount_t memory ordering. Thanks!
> 
> -Kees
> 
> --
> Kees Cook
> Pixel Security
