Date:	Thu, 03 Jan 2013 23:49:25 +0530
From:	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>
To:	Michel Lespinasse <walken@...gle.com>
CC:	Raghavendra KT <raghavendra.kt.linux@...il.com>,
	Rik van Riel <riel@...hat.com>, linux-kernel@...r.kernel.org,
	aquini@...hat.com, eric.dumazet@...il.com, lwoodman@...hat.com,
	jeremy@...p.org, Jan Beulich <JBeulich@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>, knoel@...hat.com
Subject: Re: [RFC PATCH 2/5] x86,smp: proportional backoff for ticket spinlocks

On 01/03/2013 05:12 PM, Michel Lespinasse wrote:
> On Thu, Jan 3, 2013 at 3:35 AM, Raghavendra KT
> <raghavendra.kt.linux@...il.com> wrote:
>> [Ccing IBM id]
>> On Thu, Jan 3, 2013 at 10:52 AM, Rik van Riel <riel@...hat.com> wrote:
>>> Simple fixed value proportional backoff for ticket spinlocks.
>>> By pounding on the cacheline with the spin lock less often,
>>> bus traffic is reduced. In cases of a data structure with
>>> embedded spinlock, the lock holder has a better chance of
>>> making progress.
>>>
>>> If we are next in line behind the current holder of the
>>> lock, we do a fast spin, so as not to waste any time when
>>> the lock is released.
>>>
>>> The number 50 is likely to be wrong for many setups, and
>>> this patch is mostly to illustrate the concept of proportional
>>> backoff. The next patch automatically tunes the delay value.
>>>
>>> Signed-off-by: Rik van Riel <riel@...hat.com>
>>> Signed-off-by: Michel Lespinasse <walken@...gle.com>
>>> ---
>>>   arch/x86/kernel/smp.c |   23 ++++++++++++++++++++---
>>>   1 files changed, 20 insertions(+), 3 deletions(-)
>>>
>>> diff --git a/arch/x86/kernel/smp.c b/arch/x86/kernel/smp.c
>>> index 20da354..9c56fe3 100644
>>> --- a/arch/x86/kernel/smp.c
>>> +++ b/arch/x86/kernel/smp.c
>>> @@ -117,11 +117,28 @@ static bool smp_no_nmi_ipi = false;
>>>    */
>>>   void ticket_spin_lock_wait(arch_spinlock_t *lock, struct __raw_tickets inc)
>>>   {
>>> +       __ticket_t head = inc.head, ticket = inc.tail;
>>> +       __ticket_t waiters_ahead;
>>> +       unsigned loops;
>>> +
>>>          for (;;) {
>>> -               cpu_relax();
>>> -               inc.head = ACCESS_ONCE(lock->tickets.head);
>>> +               waiters_ahead = ticket - head - 1;
>>                                               ^^^^^^^^^^^^^^
>> Just wondering: does wraparound affect this?
>
> The result gets stored in waiters_ahead, which is unsigned and has
> the same bit size as ticket and head. So this takes care of the
> wraparound issue.
>
> In other words, you may have to add 1<<8 or 1<<16 if the integer
> difference is negative; but you get that for free by just computing
> the difference as an 8- or 16-bit unsigned value.
>
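
To make that concrete, a minimal stand-alone demo of the modular
arithmetic (hypothetical user-space code, not part of the patch;
assumes the 8-bit case of __ticket_t):

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
	/* Hypothetical 8-bit tickets (__ticket_t is u8 when
	 * CONFIG_NR_CPUS < 256; the 16-bit case works the same way,
	 * modulo 1<<16 instead of 1<<8). */
	uint8_t head   = 0xfe;	/* next ticket to be served, about to wrap */
	uint8_t ticket = 0x01;	/* our ticket, handed out after the wrap */

	/* As a plain integer, 0x01 - 0xfe - 1 = -254; truncated to
	 * 8 bits it wraps modulo 256 to -254 + 256 = 2, so the 1<<8
	 * correction comes for free. */
	uint8_t waiters_ahead = ticket - head - 1;

	assert(waiters_ahead == 2);	/* tickets 0xff and 0x00 are ahead */
	printf("waiters ahead: %u\n", (unsigned)waiters_ahead);
	return 0;
}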

Michel,
Sorry for the noise and for missing the simple math :) and thanks for
the explanation.
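
Since the diff above is quoted only up to the waiters_ahead
computation, here is a sketch of the kind of loop the commit message
describes: a hypothetical user-space model using C11 atomics, with the
fixed per-waiter delay of 50 named in the patch. This is a guess at the
shape of the algorithm, not the actual kernel code.

#include <stdatomic.h>
#include <stdint.h>

/* Hypothetical user-space model of a ticket lock with 8-bit tickets. */
struct ticket_lock {
	_Atomic uint8_t head;	/* ticket currently being served */
	_Atomic uint8_t tail;	/* next ticket to hand out */
};

#define SPIN_DELAY 50		/* fixed per-waiter delay, as in this patch */

static void cpu_relax_sim(void) { }	/* stand-in for cpu_relax() */

static void ticket_lock(struct ticket_lock *lock)
{
	uint8_t ticket = atomic_fetch_add(&lock->tail, 1);
	uint8_t head = atomic_load(&lock->head);
	uint8_t waiters_ahead;
	unsigned loops;

	for (;;) {
		if (head == ticket)
			return;			/* lock acquired */

		waiters_ahead = ticket - head - 1;
		if (!waiters_ahead) {
			/* Next in line: spin tightly so no time is
			 * wasted once the lock is released. */
			do {
				cpu_relax_sim();
			} while (atomic_load(&lock->head) != ticket);
			return;
		}

		/* Back off in proportion to the queue ahead of us,
		 * touching the lock cacheline less often. */
		for (loops = SPIN_DELAY * waiters_ahead; loops; loops--)
			cpu_relax_sim();

		head = atomic_load(&lock->head);
	}
}

static void ticket_unlock(struct ticket_lock *lock)
{
	atomic_fetch_add(&lock->head, 1);
}

Per the commit message, the next patch in the series replaces the
fixed 50 with an automatically tuned delay value.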

