Message-ID: <524C75F6.1080701@hp.com>
Date:	Wed, 02 Oct 2013 15:37:26 -0400
From:	Waiman Long <waiman.long@...com>
To:	Jason Low <jason.low2@...com>
CC:	Tim Chen <tim.c.chen@...ux.intel.com>,
	Davidlohr Bueso <davidlohr@...com>,
	Ingo Molnar <mingo@...e.hu>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Andrea Arcangeli <aarcange@...hat.com>,
	Alex Shi <alex.shi@...aro.org>,
	Andi Kleen <andi@...stfloor.org>,
	Michel Lespinasse <walken@...gle.com>,
	Davidlohr Bueso <davidlohr.bueso@...com>,
	Matthew R Wilcox <matthew.r.wilcox@...el.com>,
	Dave Hansen <dave.hansen@...el.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Rik van Riel <riel@...hat.com>,
	Peter Hurley <peter@...leysoftware.com>,
	linux-kernel@...r.kernel.org, linux-mm <linux-mm@...ck.org>
Subject: Re: [PATCH v6 5/6] MCS Lock: Restructure the MCS lock defines and
 locking code into its own file

On 10/02/2013 03:30 PM, Jason Low wrote:
> On Wed, Oct 2, 2013 at 12:19 PM, Waiman Long <waiman.long@...com> wrote:
>> On 09/26/2013 06:42 PM, Jason Low wrote:
>>> On Thu, 2013-09-26 at 14:41 -0700, Tim Chen wrote:
>>>> Okay, that would make sense for consistency because we always
>>>> first set node->locked = 0 at the top of the function.
>>>>
>>>> If we prefer to optimize this a bit though, perhaps we can
>>>> move the node->locked = 0 so that it gets executed after the
>>>> "if (likely(prev == NULL)) {}" code block and then delete the
>>>> "node->locked = 1" inside that code block.
>>>>
>>>> static noinline
>>>> void mcs_spin_lock(struct mcs_spin_node **lock, struct mcs_spin_node
>>>> *node)
>>>> {
>>>>          struct mcs_spin_node *prev;
>>>>
>>>>          /* Init node */
>>>>          node->next   = NULL;
>>>>
>>>>          prev = xchg(lock, node);
>>>>          if (likely(prev == NULL)) {
>>>>                  /* Lock acquired */
>>>>                  return;
>>>>          }
>>>>          node->locked = 0;
>>
>> You can remove the statement that sets the locked flag inside
>> if (prev == NULL), but you can't clear the locked flag after xchg(). In
>> the window between the xchg() and locked=0, the previous lock owner may
>> come in and set the flag. If you then clear it, the thread will spin
>> forever. You have to clear it before the xchg().
> Yes, in my most recent version, I left locked = 0 in its original
> place so that the xchg() can act as a barrier for it.
>
> The other option would have been to put another barrier after locked =
> 0. I went with leaving locked = 0 in its original place so that we
> don't need that extra barrier.

I don't think putting another barrier after locked=0 will work. 
Chronologically, the flag must be cleared before the node address is 
saved in the lock field. There is no way to guarantee that except by 
putting the locked=0 before xchg().
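
For reference, here is a minimal stand-alone sketch of that ordering,
written with C11 atomics in user space. The names (mcs_node, mcs_lock,
mcs_unlock) and the chosen memory orders are illustrative assumptions,
not the kernel patch itself; the point it demonstrates is only that
node->locked must be cleared before the xchg() that publishes the node
as the new tail, since the predecessor may hand the lock off at any
time after that publication.

	/* Illustrative user-space MCS lock sketch, C11 atomics. */
	#include <stdatomic.h>
	#include <stdbool.h>
	#include <stddef.h>

	struct mcs_node {
		_Atomic(struct mcs_node *) next;
		atomic_bool                locked;  /* set true by our predecessor */
	};

	static void mcs_lock(_Atomic(struct mcs_node *) *lock,
			     struct mcs_node *node)
	{
		struct mcs_node *prev;

		atomic_store_explicit(&node->next, NULL, memory_order_relaxed);
		/*
		 * Clear the wait flag *before* the exchange below.  The
		 * exchange publishes 'node' as the new tail; once it is
		 * visible, the previous owner may set node->locked at any
		 * time, so clearing it afterwards could overwrite the
		 * hand-off and leave this thread spinning forever.
		 */
		atomic_store_explicit(&node->locked, false, memory_order_relaxed);

		prev = atomic_exchange_explicit(lock, node, memory_order_acq_rel);
		if (prev == NULL)
			return;		/* lock acquired uncontended */

		/* Link behind the predecessor and wait for the hand-off. */
		atomic_store_explicit(&prev->next, node, memory_order_release);
		while (!atomic_load_explicit(&node->locked, memory_order_acquire))
			;
	}

	static void mcs_unlock(_Atomic(struct mcs_node *) *lock,
			       struct mcs_node *node)
	{
		struct mcs_node *next =
			atomic_load_explicit(&node->next, memory_order_acquire);

		if (next == NULL) {
			struct mcs_node *expected = node;

			/* No successor visible: try to reset the tail. */
			if (atomic_compare_exchange_strong_explicit(
					lock, &expected, NULL,
					memory_order_acq_rel, memory_order_acquire))
				return;
			/* A successor is enqueueing; wait for it to link in. */
			while ((next = atomic_load_explicit(
					&node->next, memory_order_acquire)) == NULL)
				;
		}
		/* Hand the lock to the successor; this is the racing store. */
		atomic_store_explicit(&next->locked, true, memory_order_release);
	}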

-Longman
