Message-ID: <001f7d74-0ef9-a667-b656-bbd18491d5c1@huaweicloud.com>
Date:   Tue, 31 Jan 2023 14:56:00 +0100
From:   Jonas Oberhauser <jonas.oberhauser@...weicloud.com>
To:     Alan Stern <stern@...land.harvard.edu>
Cc:     Andrea Parri <parri.andrea@...il.com>, paulmck@...nel.org,
        will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
        npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
        luc.maranget@...ia.fr, akiyks@...il.com, dlustig@...dia.com,
        joel@...lfernandes.org, urezki@...il.com, quic_neeraju@...cinc.com,
        frederic@...nel.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 2/2] tools/memory-model: Make ppo a subrelation of po



On 1/30/2023 3:39 AM, Alan Stern wrote:
> On Sun, Jan 29, 2023 at 11:19:32PM +0100, Jonas Oberhauser wrote:
>> You could do it, by turning the relation into one massive recursive
>> definition.
> Which would make pretty much the entire memory model one big recursion.
> I do not want to do that.

Neither do I :D

>
>> Thinking about what the options are:
>> 1) accept the difference and run with it by making it consistent inside the
>> axiomatic model
>> 2) fix it through the recursive definition, which seems to be quite ugly but
>> also consistent with the power operational model as far as I can tell
>> 3) weaken the operational model... somehow
>> 4) just ignore the anomaly
>> 5) ???
>>
>> Currently my least favorite option is 4), since it seems a bit off that the
>> reasoning applies in only one specific case of LKMM, namely the data race
>> definition, which should be equivalent to "the order of the two racing
>> accesses isn't fixed"; but here the order isn't fixed and yet it's a data
>> race.
>> I think the patch happens to almost do 1) because the xbstar&int at the end
>> should already imply ordering through the prop&int <= hb rule.
>> What would remain is to also exclude rcu-fence somehow.
> IMO 1) is the best choice.

I have some additional thoughts now. It seems that you could weaken the 
operational model by stating that an A-cumulative fence orders 
propagation of all *external* stores that propagated to the CPU before 
the fence is executed, in addition to all po-earlier stores.
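
As an aside, this seems to be close to how the axiomatic model already 
phrases A-cumulativity. If I remember linux-kernel.cat correctly 
(ignoring the [Marked] restrictions), it has roughly

    let A-cumul(r) = rfe? ; r
    let cumul-fence = A-cumul(strong-fence | po-rel) | wmb | po-unlock-lock-po

where the rfe? prefix contributes exactly the external stores that were 
read (and hence had propagated to the CPU) before the fence, while 
po-earlier stores are already covered by the fence relation r itself.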

It seems that on power, from an operational model perspective, there's 
currently no difference between propagation fences ordering all stores 
vs. only the external stores that propagated to the CPU before the fence 
is executed. Power only has bidirectional (*->W) fences (sync, lwsync) 
and no uni-directional ones (acquire, release), so a store that is 
po-later than the barrier cannot be executed before the barrier; i.e., 
on power, every internal store that propagates to a CPU before the 
fence executes is also po-earlier than the fence.
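
For illustration, consider a standard WRC-shaped litmus test (my own 
example, not one from your mail):

{}

P0(int *x)
{
	WRITE_ONCE(*x, 1);
}

P1(int *x, int *y)
{
	int r1;

	r1 = READ_ONCE(*x);
	smp_mb();
	WRITE_ONCE(*y, 1);
}

P2(int *x, int *y)
{
	int r2;
	int r3;

	r2 = READ_ONCE(*y);
	smp_rmb();
	r3 = READ_ONCE(*x);
}

exists (1:r1=1 /\ 2:r2=1 /\ 2:r3=0)

The *x=1 store is external to P1 and propagated to P1 before its 
smp_mb() executed, so even the weakened statement forces *x=1 to 
propagate to P2 before *y=1, and the cycle remains forbidden; the 
weakening only gives up ordering for internal stores that are not 
po-earlier.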

If power did introduce release stores, I think you could potentially 
create implementations that allow the behavior in the example you have 
given, but I don't think they are the most natural ones:

> {}
>
> P0(int *x, int *y, int *z)
> {
> 	int r1;
>
> 	r1 = READ_ONCE(*x);
> 	smp_store_release(y, 1);
> 	WRITE_ONCE(*z, 1);
> }
>
> P1(int *x, int *y, int *z)
> {
> 	int r2;
>
> 	r2 = READ_ONCE(*z);
> 	WRITE_ONCE(*x, r2);
> }
>
> P2(int *x, int *y, int *z)
> {
> 	int r3;
> 	int r4;
>
> 	r3 = READ_ONCE(*y);
> 	smp_rmb();
> 	r4 = READ_ONCE(*z);
> }
>
> exists (0:r1=1 /\ 2:r3=1 /\ 2:r4=0)

I could imagine the following sequence of events:

1. P0 posts both of its stores to a shared store buffer before reading 
   *x, but marks the release store *y=1 as "not ready".
2. P1 forwards *z=1 from the store buffer and posts *x=1.
3. P0 reads *x=1 and subsequently marks its release store as "ready".
4. The release store *y=1 is sent to the cache, where P2 reads *y=1 and 
   then *z=0.
5. Finally, P0 sends its *z=1 store to the cache.

However, a perhaps more natural implementation would not post the 
release store to the store buffer until it is "ready", in which case the 
order in the store buffer would be *z=1 before *y=1, and in this case 
the release ordering would presumably work like your current operational 
model.

Nevertheless, perhaps this slightly weaker operational model isn't as 
absurd as it sounds. And I think many people wouldn't be shocked if the 
release store didn't provide ordering with the po-later *z=1.

Best wishes, jonas
