Date:   Sat, 11 Feb 2023 15:49:39 +0000
From:   Joel Fernandes <joel@...lfernandes.org>
To:     Alan Stern <stern@...land.harvard.edu>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>,
        linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
        kernel-team@...a.com, mingo@...nel.org, parri.andrea@...il.com,
        will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
        npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
        luc.maranget@...ia.fr, akiyks@...il.com
Subject: Re: Current LKMM patch disposition

On Mon, Feb 06, 2023 at 04:22:57PM -0500, Joel Fernandes wrote:
> On Mon, Feb 6, 2023 at 1:39 PM Alan Stern <stern@...land.harvard.edu> wrote:
> >
> > On Sun, Feb 05, 2023 at 02:10:29PM +0000, Joel Fernandes wrote:
> > > On Sat, Feb 04, 2023 at 02:24:11PM -0800, Paul E. McKenney wrote:
> > > > On Sat, Feb 04, 2023 at 09:58:12AM -0500, Alan Stern wrote:
> > > > > On Fri, Feb 03, 2023 at 05:49:41PM -0800, Paul E. McKenney wrote:
> > > > > > On Fri, Feb 03, 2023 at 08:28:35PM -0500, Alan Stern wrote:
> > > > > > > The "Provide exact semantics for SRCU" patch should have:
> > > > > > >
> > > > > > >       Portions suggested by Boqun Feng and Jonas Oberhauser.
> > > > > > >
> > > > > > > added at the end, together with your Reported-by: tag.  With that, I
> > > > > > > think it can be queued for 6.4.
> > > > > >
> > > > > > Thank you!  Does the patch shown below work for you?
> > > > > >
> > > > > > (I have tentatively queued this, but can easily adjust or replace it.)
> > > > >
> > > > > It looks fine.
> > > >
> > > > Very good, thank you for looking it over!  I pushed it out on branch
> > > > stern.2023.02.04a.
> > > >
> > > > Would anyone like to ack/review/whatever this one?
> > >
> > > Would it be possible to add comments, something like the following? Apologies
> > > if it is missing some ideas. I will try to improve it later.
> > >
> > > thanks!
> > >
> > >  - Joel
> > >
> > > ---8<-----------------------
> > >
> > > diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
> > > index ce068700939c..0a16177339bc 100644
> > > --- a/tools/memory-model/linux-kernel.bell
> > > +++ b/tools/memory-model/linux-kernel.bell
> > > @@ -57,7 +57,23 @@ let rcu-rscs = let rec
> > >  flag ~empty Rcu-lock \ domain(rcu-rscs) as unmatched-rcu-lock
> > >  flag ~empty Rcu-unlock \ range(rcu-rscs) as unmatched-rcu-unlock
> > >
> > > +(***************************************************************)
> > >  (* Compute matching pairs of nested Srcu-lock and Srcu-unlock *)
> > > +(***************************************************************)
> > > +(*
> > > + * carry-srcu-data: Handles an SRCU read-side critical section that is
> > > + * split across CPUs (say CPU0 and CPU1), with the index returned by
> > > + * srcu_read_lock() used to communicate between them: a data dependency
> > > + * links the R[srcu-lock] to the W[once][idx] on CPU0, and an rf link
> > > + * connects that W[once][idx] to the R[once][idx] on CPU1.  Srcu-unlock
> > > + * events are excluded from carry-srcu-data so that accesses are not
> > > + * captured across back-to-back SRCU read-side critical sections.
> > > + *
> > > + * srcu-rscs: Putting everything together, carry-srcu-data is followed
> > > + * by one more data dependency, from the R[once][idx] on CPU1 to the
> > > + * srcu-unlock store, and the intersection with loc restricts matches
> > > + * to lock and unlock operations on the same srcu_struct.
> > > + *)
> > >  let carry-srcu-data = (data ; [~ Srcu-unlock] ; rf)*
> > >  let srcu-rscs = ([Srcu-lock] ; carry-srcu-data ; data ; [Srcu-unlock]) & loc
> >
> > My tendency has been to keep comments in the herd7 files to a minimum
> > and to put more extended descriptions in the explanation.txt file.
> > Right now that file contains almost nothing (a single paragraph!) about
> > SRCU, so it needs to be updated to talk about the new definition of
> > srcu-rscs.  In my opinion, that's where this sort of comment belongs.
> 
> That makes sense, I agree.
> 
> > Joel, would you like to write an extra paragraph or two for that file,
> > explaining in more detail how SRCU lock-to-unlock matching is different
> > from regular RCU and how the definition of the srcu-rscs relation works?
> > I'd be happy to edit anything you come up with.
> 
> Yes, I would love to. I'll spend some more time studying this a bit
> more so I don't write nonsense. But yes, I am quite interested in
> writing something up, and I will do so!

Hi Alan, all,

One thing I noticed: shouldn't the model have some notion of fences for the
SRCU lock primitive? The kernel's SRCU implementation executes an
unconditional memory barrier in srcu_read_lock() (which it has to do for a
number of reasons, including correctness), but currently, both with and
without this patch, the following litmus test returns "Sometimes" instead of
"Never". Sorry if this was discussed before:

C MP+srcu

(*
 * Result: Sometimes
 *
 * If an srcu_read_unlock() is called between two stores, they should propagate
 * in order.
 *)

{}

P0(struct srcu_struct *s, int *x, int *y)
{
	int r1;

	r1 = srcu_read_lock(s);
	WRITE_ONCE(*x, 1);
	srcu_read_unlock(s, r1); // replacing this with smp_mb() gives Never (see variant below)
	WRITE_ONCE(*y, 1);
}

P1(struct srcu_struct *s, int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)
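
For reference, here is a sketch of the smp_mb() comparison mentioned in the
comment above (the test name and exact shape are mine). Literally replacing
srcu_read_unlock() with smp_mb() would leave an unmatched srcu_read_lock(),
which the model flags, so this variant drops both SRCU calls and stands in
an explicit full barrier instead; herd7 reports "Never" for it:

C MP+mb

(*
 * Result: Never
 *
 * With an explicit smp_mb() between the two stores, they propagate in
 * order, so the exists clause can never be satisfied.
 *)

{}

P0(int *x, int *y)
{
	WRITE_ONCE(*x, 1);
	smp_mb();
	WRITE_ONCE(*y, 1);
}

P1(int *x, int *y)
{
	int r1;
	int r2;

	r1 = READ_ONCE(*y);
	smp_rmb();
	r2 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 1:r2=0)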

Also, one more general (and likely silly) question about reflexive-transitive closures.

Say you have two relations, R1 and R2, where R2 is completely empty.

What does (R1; R2)* return?

I expect (R1; R2) to be empty, since there does not exist a tail in R1 that
is also a head in R2.

However, that does not appear to hold for the carry-srcu-data relation in
Alan's patch. For instance, with a simple litmus test that has a single
reader on one CPU and an updater on a second CPU, carry-srcu-data turns out
to be a bunch of self-loops on all the individual loads and stores on all
CPUs, including the loads and stores surrounding the updater's
synchronize_srcu() call, which is far from an empty relation!
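
Reasoning it through (assuming the * in cat is the usual reflexive-transitive
closure), the composition itself is indeed empty, but the closure of the
empty relation is the identity relation, since the zeroth power of any
relation is id:

	(R1 ; R2)* = 0* = 0^0 | 0^1 | 0^2 | ... = id | 0 | 0 | ... = id

That would be exactly a self-loop on every event, matching what I see. Is
that the intended reading of the * operator here?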

Thanks!

 - Joel
