Message-ID: <31472b1b-e466-469f-be29-05bba2a3ef52@huaweicloud.com>
Date: Fri, 17 Jan 2025 13:08:25 +0100
From: Jonas Oberhauser <jonas.oberhauser@...weicloud.com>
To: paulmck@...nel.org
Cc: Alan Stern <stern@...land.harvard.edu>, parri.andrea@...il.com,
 will@...nel.org, peterz@...radead.org, boqun.feng@...il.com,
 npiggin@...il.com, dhowells@...hat.com, j.alglave@....ac.uk,
 luc.maranget@...ia.fr, akiyks@...il.com, dlustig@...dia.com,
 joel@...lfernandes.org, urezki@...il.com, quic_neeraju@...cinc.com,
 frederic@...nel.org, linux-kernel@...r.kernel.org, lkmm@...ts.linux.dev,
 hernan.poncedeleon@...weicloud.com
Subject: Re: [RFC] tools/memory-model: Rule out OOTA



On 1/16/2025 at 8:39 PM, Paul E. McKenney wrote:
> On Thu, Jan 16, 2025 at 08:28:06PM +0100, Jonas Oberhauser wrote:
>>
>>
>> On 1/16/2025 at 7:40 PM, Paul E. McKenney wrote:
>>> 	o	If a value is read in the seqlock reader and used
>>> 		across a "you need to retry" indication, that
>>> 		flags a seqlock data race.
>>
>>
>> This too is insufficient: you also need to prevent dereferencing or having
>> a control dependency inside the seqlock. Otherwise you could dereference a
>> torn pointer and...
> 
> True, but isn't that prohibition separable from the underlying
> implementation?

Yes, but so is the prohibition of using the value after the failed 
reader_exit().

So probably it needs to be added to the specification of what you are 
allowed to do with values from the read-side critical section.

Actually, this was a bug we had in some code once, and I overlooked it 
because I thought the incorrect data wasn't used anyway, right?

Luckily I had already put the condition into our cat model, so the 
tooling caught the bug before it went out...
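
(For concreteness, here is a minimal user-space sketch of that bug 
pattern. The seqlock is hand-rolled from C11 atomics; reader_enter() and 
reader_exit() are the generic names used above, not the kernel's 
read_seqbegin()/read_seqretry(), and the writer is omitted:)

#include <stdatomic.h>

static _Atomic unsigned seq;	/* even: no writer; odd: writer active */
static int *shared_ptr;		/* plain data, updated by the (omitted) writer */

static unsigned reader_enter(void)
{
	unsigned s;

	/* Wait until no writer is active. */
	while ((s = atomic_load_explicit(&seq, memory_order_acquire)) & 1)
		;
	return s;
}

/* Returns nonzero ("you need to retry") if a writer intervened. */
static int reader_exit(unsigned s)
{
	atomic_thread_fence(memory_order_acquire);
	return atomic_load_explicit(&seq, memory_order_relaxed) != s;
}

int reader(void)
{
	unsigned s;
	int v;

	do {
		s = reader_enter();
		int *p = shared_ptr;	/* racy plain read: p may be inconsistent */
		v = *p;			/* BUG: dereferenced inside the critical
					   section, before reader_exit() has
					   confirmed the read was consistent; an
					   inconsistent p can fault right here,
					   no matter what happens to v later. */
	} while (reader_exit(s));

	return v;	/* only here is v known to be consistent */
}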


>> At this point your definition of data race becomes pretty much the same as
>> the one we have.
>>
>> https://github.com/open-s4c/libvsync/blob/main/vmm/vmm.cat#L150
>>
>>
>> (Also, this rule should only concern reads that are actually "data-racy":
>> if the read is synchronized by some other writes, then you can read and
>> use it just fine across the seqlock boundary.)
> 
> Perhaps LKMM should adopt this or something similar, but what do others
> think?

I am not sure how many others are still reading this deep into the 
conversation; it is probably best to start a new thread.


>> I also noticed that in my previous e-mail I had overlooked the reads inside
>> the CS in the failure case, but you are of course right: there needs to be
>> some mechanism to prevent them from being data-racy unless abused.
>>
>> But I am not sure how to formalize that in a way that is simpler than just
>> re-defining data races in general, without adding some special support to
>> herd7 for it.
>>
>> What do you think?
> 
> I was thinking in terms of identifying reads in critical sections (sort
> of like LKMM does for RCU read-side critical sections), then identifying
> any dependencies from those reads that cross a failed reader boundary.
> If that set is non-empty, flag it.

Yes, the general idea sounds reasonable, but the details have a lot of 
potential for future improvement.
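
(In the user-space sketch further up, the pattern such a rule would flag 
looks like the following; shared_idx, table, and sink are made-up names, 
and reader_enter()/reader_exit() are the helpers from that sketch:)

static int shared_idx;		/* updated by the (omitted) writer */
static int table[16];
static volatile int sink;

void cross_boundary_use(void)
{
	unsigned s = reader_enter();
	int idx = shared_idx;		/* plain read inside the critical section */

	if (reader_exit(s)) {
		/* A retry is required, yet idx still feeds an address
		   dependency here: exactly a dependency from a
		   critical-section read crossing a failed reader boundary. */
		sink = table[idx];	/* BUG: idx may be inconsistent */
		return;
	}
	sink = table[idx];		/* fine: the critical section was consistent */
}

(A read that is synchronized by some other writes, say an acquire load of 
a location written only with release stores, is a different story: as 
noted in the parenthetical above, such a value is fine to use even after 
a failed reader_exit().)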

One tricky part of seqlock, besides the data race itself, is that it 
relies on a kind of negative message passing: the fact that you have not 
seen the message means you also have not seen the flag. (And the message 
in this case is the write_enter(), and the flag is the plain access in 
the critical section! Fun.)

This makes it hard to formalize within the framework of LKMM and to make 
it play well with all the other pieces.
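
(Rendered as a litmus test, with made-up variable names: P0 is the 
writer, whose first WRITE_ONCE() is the "message" and whose plain store 
is the "flag". For the seqlock to be correct, the outcome below has to 
be forbidden outright; as it stands, herd7 with LKMM instead reports the 
conflicting plain accesses to *data as a data race, which is exactly the 
formalization problem:)

C seqlock-negative-mp

{}

P0(int *seq, int *data)
{
	WRITE_ONCE(*seq, 1);	/* write_enter(): the "message" */
	smp_wmb();
	*data = 42;		/* plain write: the "flag" */
	smp_wmb();
	WRITE_ONCE(*seq, 2);	/* write_exit() */
}

P1(int *seq, int *data)
{
	int s1;
	int d;
	int s2;

	s1 = READ_ONCE(*seq);
	smp_rmb();
	d = *data;		/* plain read in the critical section */
	smp_rmb();
	s2 = READ_ONCE(*seq);
}

(* A reader whose exit check has not seen the message must not have
   seen the flag: *)
exists (1:s2=0 /\ 1:d=42)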


Maybe something like also ruling out rw data races under something 
prop-like, say

    r-prop-post-bounded ; ((overwrite & ext) ; cumul-fence*) ; w-prop-pre-bounded

for the cases where the pre-bounded write must propagate after the 
overwriting store, and the post-bounded read executes before, and on the 
same CPU as, the overwritten event.

Then you could argue that if the overwriting write has not propagated to 
the overwritten event (that is, to its CPU before that event executes), 
then the pre-bounded write has not propagated to that event either.

From that you can conclude that it also cannot have propagated to the 
post-bounded read.

I'm not sure if the cases where propagation is handled by a strong-fence 
are already covered by the rw-xbstar rule.

Have fun,
   jonas

