Message-Id: <478E76E6-8DA4-4F55-B789-8D8DE0FE6621@joelfernandes.org>
Date: Sat, 11 Feb 2023 22:35:44 -0500
From: Joel Fernandes <joel@...lfernandes.org>
To: Alan Stern <stern@...land.harvard.edu>
Cc: "Paul E. McKenney" <paulmck@...nel.org>,
linux-kernel@...r.kernel.org, linux-arch@...r.kernel.org,
kernel-team@...a.com, mingo@...nel.org, parri.andrea@...il.com,
will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
luc.maranget@...ia.fr, akiyks@...il.com
Subject: Re: Current LKMM patch disposition
> On Feb 11, 2023, at 9:59 PM, Alan Stern <stern@...land.harvard.edu> wrote:
>
> On Sat, Feb 11, 2023 at 07:30:32PM -0500, Joel Fernandes wrote:
>>> On Sat, Feb 11, 2023 at 3:19 PM Alan Stern <stern@...land.harvard.edu> wrote:
>>> The idea is that the value returned by srcu_read_lock() can be stored to
>>> and loaded from a sequence (possibly of length 0) of variables, and the
>>> final load gets fed to srcu_read_unlock(). That's what the original
>>> version of the code expresses.
>>
>> Now I understand it somewhat, and I see where I went wrong. Basically,
>> you were trying to sequence zero or one "data + rf sequence" starting
>> from lock and unlock with a final "data" sequence. That data sequence
>> can be between the srcu-lock and srcu-unlock itself, in case where the
>> lock/unlock happened on the same CPU.
>
> In which case the sequence has length 0. Exactly right.
Got it.
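Just to check my understanding of the nonzero-length case, here is a rough
sketch (ignoring the litmus-test header and other boilerplate; the variable
name "idx" is only for illustration): the index from srcu_read_lock() is
stored to one intermediate variable on P0 and loaded on P1 before being
passed to srcu_read_unlock(), so the lock and unlock end up linked by
data ; rf ; data:

P0(struct srcu_struct *s, int *idx)
{
	int r0;

	r0 = srcu_read_lock(s);		/* begin the read-side critical section */
	WRITE_ONCE(*idx, r0);		/* data: r0 feeds the store to idx */
}

P1(struct srcu_struct *s, int *idx)
{
	int r1;

	r1 = READ_ONCE(*idx);		/* rf: reads P0's store to idx */
	srcu_read_unlock(s, r1);	/* data: r1 feeds the unlock */
}

Here the sequence of variables has length 1; when the lock and unlock are in
the same process and the index is passed directly, the sequence has length 0.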
>
>> Damn, that's really complicated.. and I still don't fully understand it.
>
> It sounds like you've made an excellent start. :-)
Thanks. :-)
>
>> In trying to understand your CAT code, I made some assumptions about
>> your formulas by reverse engineering the CAT code with the tests,
>> which is kind of my point that it is extremely hard to read CAT. That
>
> I can't argue against that; it _is_ hard. It helps to have had the
> right kind of training beforehand (my degree was in mathematical logic).
Got it. I am reviewing the academic material on these topics as well.
>> is kind of why I want to understand the CAT, because for me
>> explanation.txt is too much at a higher level to get a proper
>> understanding of the memory model.. I tried re-reading explanation.txt
>> many times.. then I realized I am just rewriting my own condensed set
>> of notes every few months.
>
> Would you like to post a few examples showing some of the most difficult
> points you encountered? Maybe explanation.txt can be improved.
Sure, I will share some of the things I found difficult, tomorrow or so. Off the top of my head, cumulativity was certainly pretty hard to parse.
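For example, the thing I kept tripping over is how A-cumulativity makes the
classic WRC-style pattern below forbidden when the middle write is a release.
This is only a rough sketch from memory (again ignoring the litmus header, and
the corresponding test in the tree may differ in its details):

P0(int *x)
{
	WRITE_ONCE(*x, 1);
}

P1(int *x, int *y)
{
	int r1;

	r1 = READ_ONCE(*x);		/* observe P0's store */
	smp_store_release(y, 1);	/* A-cumulative release */
}

P2(int *x, int *y)
{
	int r2;
	int r3;

	r2 = READ_ONCE(*y);		/* observe P1's release */
	smp_rmb();
	r3 = READ_ONCE(*x);
}

The outcome r1=1 && r2=1 && r3=0 is forbidden: because the release on P1 is
A-cumulative, the store to x that P1 has already observed must propagate to P2
before the store to y does, so once P2 sees y==1 it cannot miss x==1.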
>
>>> I'm not sure that breaking this relation up into pieces will make it any
>>> easier to understand.
>>
>> Yes, but I tried. I will keep trying to understand your last patch
>> more. Especially I am still not sure, why in the case of an SRCU
>> reader on a single CPU, the following does not work:
>> let srcu-rscs = ([Srcu-lock]; data; [Srcu-unlock]).
>
> You have to understand that herd7 does not track dependencies through
> stores and subsequent loads. That is, if you have something like:
>
> r1 = READ_ONCE(*x);
> WRITE_ONCE(*y, r1);
> r2 = READ_ONCE(*y);
> WRITE_ONCE(*z, r2);
>
> then herd7 will realize that the write to y depends on the value read
> from x, and it will realize that the write to z depends on the value
> read from y. But it will not realize that the write to z depends on the
> value read from x; it loses track of that dependency because of the
> intervening store/load from y.
>
> More to the point, if you have:
>
> r1 = srcu_read_lock(lock);
> WRITE_ONCE(*y, r1);
> r2 = READ_ONCE(*y);
> srcu_read_unlock(lock, r2);
>
> then herd7 will not realize that the value of r2 depends on the value of
> r1. So there will be no data dependency from the srcu_read_lock() to
> the srcu_read_unlock().
Got it! Now I understand why the intermediate loads and stores need to be modeled with your carry-srcu-data formula, even on the same CPU! Thank you so much, Alan!
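For my own notes, my simplified reading of the relation is roughly the
following (quoting from memory and leaving out the refinements in your actual
patch, so please correct me if I have mangled it):

	let carry-srcu-data = (data ; rf)*
	let srcu-rscs = ([Srcu-lock] ; carry-srcu-data ; data ; [Srcu-unlock])

That is, the lock and the unlock may be linked by any number of data ; rf
steps through intermediate variables, followed by one final data step into the
unlock; with zero such steps it degenerates to the plain data dependency I had
proposed.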
Thanks,
- Joel
>
> Alan