Message-ID: <50E5BD0F.9040004@redhat.com>
Date:	Thu, 03 Jan 2013 12:17:03 -0500
From:	Rik van Riel <riel@...hat.com>
To:	Michel Lespinasse <walken@...gle.com>
CC:	linux-kernel@...r.kernel.org, aquini@...hat.com,
	eric.dumazet@...il.com, lwoodman@...hat.com, jeremy@...p.org,
	Jan Beulich <JBeulich@...ell.com>,
	Thomas Gleixner <tglx@...utronix.de>, knoel@...hat.com
Subject: Re: [RFC PATCH 3/5] x86,smp: auto tune spinlock backoff delay factor

On 01/03/2013 07:31 AM, Michel Lespinasse wrote:

> I'll see if I can make a more concrete proposal and still keep it
> short enough :)

Looking forward to that. I have thought about it some more,
and am still not sure about a better description for the
changelog...

>> +#define MIN_SPINLOCK_DELAY 1
>> +#define MAX_SPINLOCK_DELAY 16000
>> +DEFINE_PER_CPU(int, spinlock_delay) = { MIN_SPINLOCK_DELAY };
>
> unsigned would seem more natural here, though it's only a tiny detail

I might as well make that change while addressing the issues
you found :)
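For reference, a rough sketch of what the declaration might look like after
switching to unsigned (same names as the quoted patch; the exact form is
just illustrative):

	#define MIN_SPINLOCK_DELAY 1
	#define MAX_SPINLOCK_DELAY 16000
	/* Per-CPU tuned delay; unsigned, since it never goes negative. */
	DEFINE_PER_CPU(unsigned int, spinlock_delay) = MIN_SPINLOCK_DELAY;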

>> +
>> +               /*
>> +                * The lock is still busy; slowly increase the delay. If we
>> +                * end up sleeping too long, the code below will reduce the
>> +                * delay. Ideally we acquire the lock in the tight loop above.
>> +                */
>> +               if (!(head % 7) && delay < MAX_SPINLOCK_DELAY)
>> +                       delay++;
>> +
>> +               loops = delay * waiters_ahead;
>
> I don't like the head % 7 thing. I think using fixed point arithmetic
> would be nicer:
>
> if (delay < MAX_SPINLOCK_DELAY)
>    delay += 256/7; /* Or whatever constant we choose */
>
> loops = (delay * waiters_ahead) >> 8;

I'll do that. That should completely get rid of the artifacts
caused by incrementing the delay on some iterations and not on
others.
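Roughly, with 8 fractional bits as you suggest (the constants are
illustrative, and MAX_SPINLOCK_DELAY would have to be expressed in the
same fixed-point units):

	/*
	 * Keep the delay in fixed point with 8 fractional bits, so adding
	 * 256/7 each iteration grows the effective delay by about 1/7 per
	 * spin, with none of the head % 7 sampling artifacts.
	 */
	if (delay < (MAX_SPINLOCK_DELAY << 8))
		delay += 256 / 7;

	loops = (delay * waiters_ahead) >> 8;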

> Also, we should probably skip the delay increment on the first loop
> iteration - after all, we haven't waited yet, so we can't say that the
> delay was too short.

Good point. I will do that.
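Probably something as simple as a local flag, along these lines (the flag
and loop shape are just a sketch, not what the patch currently has):

	bool waited = false;

	for (;;) {
		/* ... tight loop spinning on head == ticket ... */

		/* Only grow the delay once we have actually waited. */
		if (waited && delay < (MAX_SPINLOCK_DELAY << 8))
			delay += 256 / 7;
		waited = true;

		loops = (delay * waiters_ahead) >> 8;
		/* ... delay loop ... */
	}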

>> -               if (head == ticket)
>> +               if (head == ticket) {
>> +                       /*
>> +                        * We overslept and have no idea how long the lock
>> +                        * went idle. Reduce the delay as a precaution.
>> +                        */
>> +                       delay -= delay/32 + 1;
>
> There is a possibility of integer underflow here.

Fixed in my local code base now.
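Something along these lines avoids it, clamping back to the minimum (a
sketch only; with the fixed-point change the minimum would be
MIN_SPINLOCK_DELAY << 8):

		if (head == ticket) {
			/*
			 * We overslept; back the delay off, but never let
			 * it drop below the minimum, so the subtraction
			 * cannot wrap a small (or unsigned) delay value.
			 */
			delay -= delay / 32 + 1;
			if (delay < MIN_SPINLOCK_DELAY)
				delay = MIN_SPINLOCK_DELAY;
			break;
		}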

I will build a kernel with the things you pointed out fixed,
and will give it a spin this afternoon.

Expect new patches soonish :)
