Date:	Tue, 12 Apr 2016 14:55:13 -0400
From:	Waiman Long <waiman.long@....com>
To:	Andy Lutomirski <luto@...capital.net>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Jonathan Corbet <corbet@....net>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	<linux-doc@...nel.org>, X86 ML <x86@...nel.org>,
	Jiang Liu <jiang.liu@...ux.intel.com>,
	Borislav Petkov <bp@...e.de>,
	Andy Lutomirski <luto@...nel.org>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Randy Wright <rwright@....com>
Subject: Re: [PATCH v3] x86/hpet: Reduce HPET counter read contention

On 04/12/2016 12:37 PM, Waiman Long wrote:
> On 04/11/2016 04:21 PM, Andy Lutomirski wrote:
>>
>>> +
>>> +                       /* Unlock */
>>> +                       smp_store_release(&hpet_save.seq, new + 1);
>>> +                       local_irq_restore(flags);
>>> +                       return (cycle_t)time;
>>> +               }
>>> +               local_irq_restore(flags);
>>> +               seq = new;
>>> +       }
>>> +
>>> +       /*
>>> +        * Wait until the locked sequence number changes, which
>>> +        * indicates that the saved HPET value is up-to-date.
>>> +        */
>>> +       while (READ_ONCE(hpet_save.seq) == seq) {
>>> +               /*
>>> +                * Since reading the HPET is much slower than a single
>>> +                * cpu_relax() instruction, we use two here in an
>>> +                * attempt to reduce the amount of cacheline contention
>>> +                * in the
>>> +                * hpet_save.seq cacheline.
>>> +                */
>>> +               cpu_relax();
>>> +               cpu_relax();
>>> +       }
>>> +
>>> +       return (cycle_t)READ_ONCE(hpet_save.hpet);
>>> +}
>> I wonder if this could be simplified.  Pseudocode:
>>
>> u32 time;
>> unsigned long flags;
>>
>> local_irq_save(flags);
>>
>> if (spin_trylock(&hpet_lock)) {
>>    time = hpet_readl(HPET_COUNTER);
>>    WRITE_ONCE(last_hpet_counter, time);
>
> You will need a spin_unlock(&hpet_lock) here.
>
>> } else {
>>    spin_unlock_wait(&hpet_lock);
>>    /* When this function started, hpet_lock was locked.  Now it's
>> unlocked, which means that time is at least as new as whatever the
>> lock holder returned. */
>>   time = READ_ONCE(last_hpet_counter);
>> }
>>
>> local_irq_restore(flags);
>> return time;
>>
>> Should be faster under heavy contention, too: spinlocks are very nicely
>> optimized.
>
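Filling in the pseudocode above (including the spin_unlock() noted as missing), a minimal compilable userspace analogue might look like the sketch below. All names here are illustrative inventions: an atomic int stands in for hpet_lock, a fetch-add stands in for the slow hpet_readl(HPET_COUNTER) read, and the local_irq_save/restore pair has no userspace equivalent, so it is omitted.

```c
#include <stdatomic.h>
#include <stdint.h>

static _Atomic int fake_lock;          /* models hpet_lock: 0 = free, 1 = held */
static _Atomic uint32_t last_counter;  /* models last_hpet_counter */
static _Atomic uint32_t fake_counter;  /* models the HPET counter itself */

static uint32_t read_counter(void)
{
    uint32_t time;

    /* spin_trylock() analogue: exchange returns the previous value,
     * so 0 means we just took a free lock. */
    if (!atomic_exchange_explicit(&fake_lock, 1, memory_order_acquire)) {
        time = atomic_fetch_add(&fake_counter, 1) + 1;  /* "hpet_readl()" */
        atomic_store_explicit(&last_counter, time, memory_order_relaxed);
        /* the spin_unlock() missing from the original pseudocode */
        atomic_store_explicit(&fake_lock, 0, memory_order_release);
    } else {
        /* spin_unlock_wait() analogue: the lock was held when this
         * function started, so once it is released the saved value is
         * at least as new as whatever the holder returned. */
        while (atomic_load_explicit(&fake_lock, memory_order_acquire))
            ;
        time = atomic_load_explicit(&last_counter, memory_order_relaxed);
    }
    return time;
}
```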
> I don't think it will be faster. The current spinlock code isn't more 
> optimized than what you can do with a cmpxchg and smp_store_release. 
> In fact, that is what the spinlock code is actually doing. Other 
> differences include:
>
> 1) A CPU will not do local_irq_save/local_irq_restore when the lock is 
> not free.
> 2) My patch also uses a sequence number change to indicate that an 
> updated time stamp is available. So there will be cases where CPUs 
> running your code have to wait while those running my code can grab 
> the time stamp and return immediately.
>

Moreover, if the timing is such that right after one CPU releases the 
lock, the next one gets it immediately, and there is a continuous stream 
of incoming CPUs, the CPUs waiting for the lock to become free may see 
it held for an indefinite period of time. This is certainly not 
something we want to have, and this is what the sequence number is for.

Cheers,
Longman
