Message-ID: <Y+f4TYZ9BPlt8y8B@rowland.harvard.edu>
Date:   Sat, 11 Feb 2023 15:19:25 -0500
From:   Alan Stern <stern@...land.harvard.edu>
To:     Joel Fernandes <joel@...lfernandes.org>
Cc:     "Paul E. McKenney" <paulmck@...nel.org>,
        linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
        kernel-team@...a.com, mingo@...nel.org, parri.andrea@...il.com,
        will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
        npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
        luc.maranget@...ia.fr, akiyks@...il.com
Subject: Re: Current LKMM patch disposition

On Sat, Feb 11, 2023 at 05:18:17PM +0000, Joel Fernandes wrote:
> I am happy to make changes to explanation.txt (I am assuming that's the file
> you mentioned), but I was wondering what you thought of the following change.
> If the formulas are split up, that could itself serve as some documentation.
> I did also add a small paragraph at the top of the formulas.
> 
> Some light testing shows it works with the cross-CPU litmus test (could still
> have bugs though and needs more testing).
> 
> Let me know how you feel about it, and if I should submit something along
> these lines along with your suggestion to edit the explanation.txt. Thanks!
> 
> diff --git a/tools/memory-model/linux-kernel.bell b/tools/memory-model/linux-kernel.bell
> index ce068700939c..1390d1b8ceee 100644
> --- a/tools/memory-model/linux-kernel.bell
> +++ b/tools/memory-model/linux-kernel.bell
> @@ -57,9 +57,28 @@ let rcu-rscs = let rec
>  flag ~empty Rcu-lock \ domain(rcu-rscs) as unmatched-rcu-lock
>  flag ~empty Rcu-unlock \ range(rcu-rscs) as unmatched-rcu-unlock
>  
> -(* Compute matching pairs of nested Srcu-lock and Srcu-unlock *)
> -let carry-srcu-data = (data ; [~ Srcu-unlock] ; rf)*
> -let srcu-rscs = ([Srcu-lock] ; carry-srcu-data ; data ; [Srcu-unlock]) & loc
> +(* SRCU read-side section modeling
> + * Compute matching pairs of nested Srcu-lock and Srcu-unlock:
> + * Each SRCU read-side critical section is treated as independent of other
> + * overlapping SRCU read-side critical sections, even when on the same domain.
> + * For this, each Srcu-lock and Srcu-unlock pair is treated as loads and
> + * stores, with the data-dependency flow also treated as independent to prevent
> + * fusing. *)

Even that is more than I would put in the source file.  Basically, the 
memory model is so complex that trying to document it in comments is 
hopeless.  The comments would have to be many times longer than the 
actual code -- as is the case here with just this little part of the 
model.  That's the main reason why I made explanation.txt a completely 
separate file.

> +
> +(* Data dependency between lock and idx store *)
> +let srcu-lock-to-store-idx = ([Srcu-lock]; data)
> +
> +(* Data dependency between idx load and unlock *)
> +let srcu-load-idx-to-unlock = (data; [Srcu-unlock])
> +
> +(* Read-from dependency between idx store on one CPU and load on same/another.
> + * This is required to model the splitting of critical section across CPUs. *)
> +let srcu-store-to-load-idx = (rf ; srcu-load-idx-to-unlock)
> +
> +(* SRCU data dependency flow. Exclude Srcu-unlock so the chain does not extend across back-to-back rscs *)
> +let carry-srcu-data = (srcu-lock-to-store-idx ; [~ Srcu-unlock] ; srcu-store-to-load-idx)*
> +
> +let srcu-rscs = ([Srcu-lock] ; carry-srcu-data ; [Srcu-unlock]) & loc

That doesn't look right at all.  Does it work with the following?

P0(struct srcu_struct *lock)
{
	int r1;

	r1 = srcu_read_lock(lock);
	srcu_read_unlock(lock, r1);
}

or

P0(struct srcu_struct *lock, int *x, int *y)
{
	*x = srcu_read_lock(lock);
	*y = *x;
	srcu_read_unlock(lock, *y);
}

The idea is that the value returned by srcu_read_lock() can be stored to 
and loaded from a sequence (possibly of length 0) of variables, and the 
final load gets fed to srcu_read_unlock().  That's what the original 
version of the code expresses.
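
For instance, in the second example the chain of events is roughly:

	Srcu-lock ->data-> W *x ->rf-> R *x ->data-> W *y ->rf-> R *y ->data-> Srcu-unlock

Each data ; rf step comes from one iteration of the
(data ; [~ Srcu-unlock] ; rf)* part of carry-srcu-data, and the final
data ; [Srcu-unlock] step in srcu-rscs closes the match.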

I'm not sure that breaking this relation up into pieces will make it any 
easier to understand.
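
For reference, the cross-CPU splitting described in the new comments,
where the index is stored on one CPU and the matching unlock runs on
another, would presumably be something like this minimal sketch (idx
here is just a made-up shared variable):

P0(struct srcu_struct *lock, int *idx)
{
	*idx = srcu_read_lock(lock);
}

P1(struct srcu_struct *lock, int *idx)
{
	srcu_read_unlock(lock, *idx);
}

There the value returned by srcu_read_lock() passes through a sequence
of length 1 (the variable idx), so it is the same pattern as above,
just with the store and the load on different CPUs.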

Alan
