Date:	Wed, 29 Jan 2014 12:57:57 -0500
From:	Waiman Long <waiman.long@...com>
To:	Andi Kleen <andi@...stfloor.org>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, Arnd Bergmann <arnd@...db.de>,
	linux-arch@...r.kernel.org, x86@...nel.org,
	linux-kernel@...r.kernel.org,
	Peter Zijlstra <peterz@...radead.org>,
	Steven Rostedt <rostedt@...dmis.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Michel Lespinasse <walken@...gle.com>,
	Rik van Riel <riel@...hat.com>,
	"Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Raghavendra K T <raghavendra.kt@...ux.vnet.ibm.com>,
	George Spelvin <linux@...izon.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Daniel J Blueman <daniel@...ascale.com>,
	Alexander Fyodorov <halcy@...dex.ru>,
	Aswin Chandramouleeswaran <aswin@...com>,
	Scott J Norton <scott.norton@...com>,
	Thavatchai Makphaibulchoke <thavatchai.makpahibulchoke@...com>
Subject: Re: [PATCH v3 1/2] qspinlock: Introducing a 4-byte queue spinlock
 implementation

On 01/28/2014 07:20 PM, Andi Kleen wrote:
> So the 1-2 threads case is the standard case on a small
> system, isn't it? This may well cause regressions.
>

Yes, it is possible that in a lightly contended case the queue spinlock 
may be a bit slower because of the slowpath overhead. I observed a 
slight slowdown in some of the lightly contended workloads. I will run 
more tests on a smaller 2-socket system, or even a 1-socket system, to 
see whether there is any observable regression.
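
To make the tradeoff concrete, here is a rough userspace C sketch of the 
fastpath/slowpath split (the names below are made up for illustration 
and are not the actual patch identifiers): the uncontended case is a 
single cmpxchg on the 4-byte lock word, and only contended acquirers pay 
the extra cost of queuing in the slowpath.

#include <stdatomic.h>

struct qlock { atomic_uint val; };      /* 4-byte lock word: 0 = unlocked */

void qlock_slowpath(struct qlock *l);   /* queuing path, sketched further below */

static inline void qlock_acquire(struct qlock *l)
{
        unsigned int old = 0;

        /* Uncontended fastpath: a single atomic cmpxchg on the lock word. */
        if (atomic_compare_exchange_strong(&l->val, &old, 1))
                return;

        /*
         * Contended case: fall into the queuing slowpath.  This is the
         * extra overhead that can show up as a slight slowdown when
         * contention is light.
         */
        qlock_slowpath(l);
}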

>> In the extremely unlikely case that all the queue node entries are
>> used up, the current code will fall back to busy spinning without
>> waiting in a queue, with a warning message.
> Traditionally we had some code which could take thousands
> of locks in rare cases (e.g. all locks in a hash table or all locks of
> a big reader lock).
>
> The biggest offender was the mm for changing mmu
> notifiers, but I believe that's a mutex now.
> lglocks presumably still can do it on large enough
> systems.  I wouldn't be surprised if there is
> other code which e.g. may take all locks in a table.
>
> I don't think the warning is valid, and it will
> likely trigger in some obscure cases.
>
> -Andi

As George explained, the queue node is only needed while the thread is 
waiting to acquire the lock. Once it gets the lock, the node can be 
released and reused.
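
Roughly (again just an illustrative userspace sketch with made-up names, 
not the patch code), the slowpath only holds a node for the duration of 
the wait:

#include <stdatomic.h>
#include <stdbool.h>

#define MAX_QNODES 4    /* e.g. one per nesting level: task, softirq, hardirq, NMI */

struct qlock { atomic_uint val; };  /* same 4-byte lock word as in the sketch above */
struct qnode { bool used; };        /* MCS-style link fields would live here too */

/* Thread-local here; the real thing would be a per-CPU array. */
static _Thread_local struct qnode qnodes[MAX_QNODES];

static bool qtrylock(struct qlock *l)
{
        unsigned int old = 0;
        return atomic_compare_exchange_strong(&l->val, &old, 1);
}

void qlock_slowpath(struct qlock *l)
{
        struct qnode *node = NULL;
        int i;

        for (i = 0; i < MAX_QNODES; i++) {      /* grab a free node, if any */
                if (!qnodes[i].used) {
                        node = &qnodes[i];
                        node->used = true;
                        break;
                }
        }

        /*
         * In this simplified sketch both cases just busy-spin on the lock
         * word.  The real slowpath queues MCS-style on the node when one
         * is available, and only falls back to plain spinning in the
         * "extremely unlikely" case that every node is in use.
         */
        while (!qtrylock(l))
                ;

        /*
         * The node was only needed while waiting.  The lock is held now,
         * so the node is released immediately and can be reused by a
         * later acquisition; it is not kept until unlock time.
         */
        if (node)
                node->used = false;
}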

-Longman

