Date:	Fri, 12 Aug 2016 13:01:34 -0400
From:	Waiman Long <waiman.long@....com>
To:	Dave Hansen <dave.hansen@...el.com>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
	<x86@...nel.org>, Jiang Liu <jiang.liu@...ux.intel.com>,
	Borislav Petkov <bp@...e.de>,
	Andy Lutomirski <luto@...nel.org>,
	Prarit Bhargava <prarit@...hat.com>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Randy Wright <rwright@....com>,
	John Stultz <john.stultz@...aro.org>
Subject: Re: [RESEND PATCH v4] x86/hpet: Reduce HPET counter read contention

On 08/11/2016 08:31 PM, Dave Hansen wrote:
> On 08/11/2016 04:22 PM, Waiman Long wrote:
>> On 08/11/2016 03:32 PM, Dave Hansen wrote:
>>> It's a real bummer that this all has to be open-coded.  I have to wonder
>>> if there were any alternatives that you tried that were simpler.
>> What do you mean by "open-coded"? Do you mean the function can be inlined?
> I just mean that it's implementing its own locking instead of being able
> to use spinlocks or seqlocks, or some other existing primitive.

The reason for using a special lock is that I want both the sequence
number update and the locking to be done together atomically. They could
be kept separate, as is done in the seqlock, but that would make the code
more complex in order to ensure that all the threads see a consistent
pair of lock state and sequence number.
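
To illustrate the idea (just a sketch with made-up names, not the
actual patch code): bit 0 of a single word serves as the lock and the
remaining bits as the sequence number, so one cmpxchg() takes the lock
and bumps the sequence at the same time:

/*
 * Sketch only -- illustrative names, not the actual patch.
 * Bit 0 of hpet_seq is the lock bit, bits 1..31 the sequence number.
 */
static unsigned int hpet_seq;		/* lock bit + sequence number */
static u32 hpet_saved;			/* HPET value read by the lock holder */

#define HPET_SEQ_LOCKED(s)	((s) & 1)

static u32 read_hpet_shared(void)
{
	unsigned int seq = READ_ONCE(hpet_seq);

	if (!HPET_SEQ_LOCKED(seq) &&
	    (cmpxchg(&hpet_seq, seq, seq + 1) == seq)) {
		/* One atomic op: lock taken and sequence bumped. */
		u32 val = hpet_readl(HPET_COUNTER);

		WRITE_ONCE(hpet_saved, val);
		/* Publish the value before the unlock/sequence update. */
		smp_store_release(&hpet_seq, seq + 2);
		return val;
	}

	/* Contended: wait for the lock holder, then reuse its value. */
	while (HPET_SEQ_LOCKED(READ_ONCE(hpet_seq)))
		cpu_relax();

	return READ_ONCE(hpet_saved);
}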

>>> Is READ_ONCE()/smp_store_release() really strong enough here?  It
>>> guarantees ordering, but you need ordering *and* a guarantee that your
>>> write is visible to the reader.  Don't you need actual barriers for
>>> that?  Otherwise, you might be seeing a stale HPET value, and the spin
>>> loop that you did waiting for it to be up-to-date was worthless.  The
>>> seqlock code uses barriers, btw.
>> The cmpxchg() and smp_store_release() act as the lock/unlock sequence
>> with the proper barriers. Another important point is that the HPET value
>> becomes visible to the other readers before the sequence number. This is
>> what the smp_store_release() provides. cmpxchg() is an actual barrier,
>> even though smp_store_release() is not. However, the x86 architecture
>> will guarantee that the writes stay in order, I think.
> The contended case (where HPET_SEQ_LOCKED(seq)) doesn't do the cmpxchg.
>   So it's entirely relying on the READ_ONCE() on the "reader" side and
> the cmpxchg/smp_store_release() on the "writer".  This probably works in
> practice, but I'm not sure it's guaranteed behavior.
>

It is true that the latency before the sequence number change becomes
visible to the other CPUs can be unpredictable. All the writer-side code
does is make sure that the new HPET value is visible before the sequence
number change. Do you know of a way to reduce that latency without
introducing too much overhead, like changing the smp_store_release() to
smp_store_mb(), maybe?
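
That change would look something like this on the writer side (same
illustrative names as in the sketch above):

	WRITE_ONCE(hpet_saved, val);
	/*
	 * smp_store_mb() is the store plus a full memory barrier, so
	 * the writer waits until the unlock/sequence update is globally
	 * visible instead of just ordering it after the value store.
	 */
	smp_store_mb(hpet_seq, seq + 2);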

Cheers,
Longman
