Date:	Mon, 22 Apr 2013 12:42:49 -0400
From:	Jiannan Ouyang <ouyang@...pitt.edu>
To:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
Cc:	LKML <linux-kernel@...r.kernel.org>,
	Peter Zijlstra <peterz@...radead.org>,
	Avi Kivity <avi.kivity@...il.com>,
	Gleb Natapov <gleb@...hat.com>, Ingo Molnar <mingo@...hat.com>,
	Marcelo Tosatti <mtosatti@...hat.com>,
	Rik van Riel <riel@...hat.com>,
	Srikar <srikar@...ux.vnet.ibm.com>,
	"H. Peter Anvin" <hpa@...or.com>,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	KVM <kvm@...r.kernel.org>, Thomas Gleixner <tglx@...utronix.de>,
	Chegu Vinod <chegu_vinod@...com>,
	"Andrew M. Theurer" <habanero@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <srivatsa.vaddagiri@...il.com>,
	Andrew Jones <drjones@...hat.com>
Subject: Re: Preemptable Ticket Spinlock

On Mon, Apr 22, 2013 at 1:58 AM, Raghavendra K T
<raghavendra.kt@...ux.vnet.ibm.com> wrote:

>> The intuition is to downgrade a fair lock to an unfair lock
>> automatically upon preemption, and preserve the fairness otherwise.
>
>
> I also hope that being a little unfair does not affect the original
> intention of introducing ticket spinlocks.
> There was some discussion of this long back in this thread:
> https://lkml.org/lkml/2010/6/3/331
>

Good point. I have also wondered why we don't simply use an unfair lock
in a virtualized environment, and whether fairness is really that big
an issue. However, given that the current kernel uses ticket locks, I
assume fairness is a necessary spinlock feature.

Regarding the fairness of the preemptable lock, I did a user-space
experiment using 8 pCPUs competing for one spinlock, counting lock
acquisitions per thread. The results show that acquisition counts are
*almost* evenly distributed across the threads.

>
> AFAIU, the experiments are on non-PLE machines, and it would be worth
> experimenting on PLE machines too, as well as on bigger machines
> (we may get some surprises there otherwise).
> I'll wait for your next iteration of the patches with the
> "using lower bit" changes.
>

Yes, they are on PLE machines. The current implementation and
evaluation are still at the proof-of-concept stage. More experiments
(with PLE, bigger machines, etc.) are needed to better understand the
lock's behavior.

>> +#define TIMEOUT_UNIT (1<<14)
>
>
> This value seems to be at the higher end, but I hope you have
> experimented enough to arrive at it. It would be better to test all
> these tunables on PLE machines too.
>
>
I actually didn't tune this parameter at all... But yes, finding a
better value would be necessary for production code.

>>   static __always_inline void __ticket_spin_lock(arch_spinlock_t *lock)
>>   {
>>          register struct __raw_tickets inc = { .tail = 1 };
>> +       unsigned int timeout = 0;
>> +       __ticket_t current_head;
>>
>>          inc = xadd(&lock->tickets, inc);
>> -
>> +       if (likely(inc.head == inc.tail))
>> +               goto spin;
>> +
>> +       timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
>> +       do {
>> +               current_head = ACCESS_ONCE(lock->tickets.head);
>> +               if (inc.tail <= current_head) {
>> +                       goto spin;
>> +               } else if (inc.head != current_head) {
>> +                       inc.head = current_head;
>> +                       timeout =  TIMEOUT_UNIT * (inc.tail - inc.head);
>
>
> Indeed a good idea to base the loop on the head/tail difference. But
> for virtualization I believe this "directly proportional" notion is a
> little tricky too.
>

Could you explain your concern a little bit more?

Thanks
--Jiannan