Message-ID: <53EE464D.7060803@hp.com>
Date:	Fri, 15 Aug 2014 13:41:33 -0400
From:	Waiman Long <waiman.long@...com>
To:	Davidlohr Bueso <davidlohr@...com>
CC:	peterz@...radead.org, mingo@...nel.org, jason.low2@...com,
	scott.norton@...com, aswin@...com, linux-kernel@...r.kernel.org
Subject: Re: [PATCH -tip] locking/mutexes: Avoid bogus wakeups after lock
 stealing

On 08/14/2014 01:30 PM, Davidlohr Bueso wrote:
> On Thu, 2014-08-14 at 13:17 -0400, Waiman Long wrote:
>>
>> I still think it is better to do that after spin_lock_mutex().
> As mentioned, this causes all sorts of hung tasks when another task
> enters the locking slowpath. There's a big fat comment above.
>
>> In
>> addition, the atomic_set() is racy. It is better to do something like
> Why is it racy? Atomically setting the lock to -1 given that the lock
> was stolen should be safe. The alternative we discussed with Jason was
> to set the counter to -1 in the spinning path. But given that we need to
> serialize the counter check with the list_empty() check that would
> require the wait_lock. This is very messy and unnecessarily complicates
> things.
>
Let's consider the following scenario:

   Task 1                                  Task 2
   ------                                  ------
                                         steal the lock
if (mutex_has_owner) {                       :
         : <---- a long interrupt        mutex_unlock() [cnt = 1]
     atomic_set(cnt, -1);
     return;
}

Now the lock is no longer available and all the tasks that are trying
to get it will hang. IOW, you cannot set the count to -1 unless you
are sure it is 0 to begin with.

>>     if (atomic_cmpxchg(&lock->count, 0, -1) <= 0)
>>       return;
> Not really because some archs leave the lock at 1 after the unlock
> fastpath.

Yes, I know that. I am saying x86 won't get any benefit from this patch.

-Longman
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
