Date:	Wed, 16 Sep 2015 09:52:29 +0000
From:	Zhu Jefferry <Jefferry.Zhu@...escale.com>
To:	Thomas Gleixner <tglx@...utronix.de>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"bigeasy@...utronix.de" <bigeasy@...utronix.de>
Subject: RE: [PATCH v2] futex: lower the lock contention on the HB lock during
 wake up

> > I assume your pseudo code set_waiter_bit maps to the real code
> > futex_lock_pi_atomic. It's possible for futex_lock_pi_atomic to
> > successfully set the FUTEX_WAITERS bit but still return with a page
> > fault, for example by failing in lookup_pi_state().
> 
> No. It's not. lookup_pi_state() cannot return EFAULT. The only function
> which can fault inside of lock_pi_update_atomic() is the actual cmpxchg.
> Though lock_pi_update_atomic() can successfully set the waiter bit and
> then return with some other failure code (ESRCH, EAGAIN, ...). But that
> does not matter at all.
> 
> Any failure return will end up in a retry. And if the waker managed to
> release the futex before the retry takes place then the waiter will see
> that and take the futex.
> 
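For reference, below is a minimal userspace sketch of the lock-word
protocol described above. It is an illustration only, not the kernel's
futex code: the FUTEX_WAITERS value and the TID-in-lock-word layout
mirror the real ABI, but try_lock_or_mark_waiters() is a hypothetical
helper and the retry loop is simplified.

#include <stdatomic.h>
#include <stdint.h>
#include <stdio.h>

#define FUTEX_WAITERS 0x80000000u   /* matches the real futex ABI flag */

static _Atomic uint32_t lock_word;  /* 0 = free, else owner TID (+ flags) */

/* Try to take the lock for `tid`; if it is held, set FUTEX_WAITERS so
 * the owner knows to wake someone on unlock.  Returns 1 on acquire,
 * 0 if the caller must block and retry. */
static int try_lock_or_mark_waiters(uint32_t tid)
{
	for (;;) {
		uint32_t cur = atomic_load(&lock_word);
		if (cur == 0) {
			if (atomic_compare_exchange_weak(&lock_word, &cur, tid))
				return 1;	/* acquired uncontended */
			continue;		/* lost the race, retry */
		}
		if (cur & FUTEX_WAITERS)
			return 0;		/* waiters bit already set: block */
		if (atomic_compare_exchange_weak(&lock_word, &cur,
						 cur | FUTEX_WAITERS))
			return 0;		/* marked contended: block, retry */
		/* CAS failed: owner changed or released; retry from the top */
	}
}

int main(void)
{
	printf("acquired: %d\n", try_lock_or_mark_waiters(42));
	printf("lock word: %#x\n", (unsigned)atomic_load(&lock_word));
	return 0;
}

Every failure path above simply loops back to the top, which matches the
explanation: if the waker released the futex before the retry, the waiter
sees the word change and takes it.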
Let me try to describe the application failure here.

The application is a multi-threaded program that uses mutex_lock and
mutex_unlock pairs to protect a shared data structure. The type of this
mutex is PTHREAD_MUTEX_PI_RECURSIVE_NP. After running for a long time,
say several days, the mutex data structure in user space looks corrupted.
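For context, the mutex is set up roughly as below. This is a minimal
sketch, not the actual application code; the portable attribute calls
produce the combination glibc internally names PTHREAD_MUTEX_PI_RECURSIVE_NP
(recursive type plus priority inheritance).

#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t lock;

int main(void)
{
	pthread_mutexattr_t attr;

	pthread_mutexattr_init(&attr);
	/* recursive: the owner may relock; __counter tracks nesting depth */
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
	/* priority inheritance: lock/unlock use FUTEX_LOCK_PI/UNLOCK_PI */
	pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
	pthread_mutex_init(&lock, &attr);
	pthread_mutexattr_destroy(&attr);

	pthread_mutex_lock(&lock);
	pthread_mutex_lock(&lock);	/* recursive relock: __counter -> 2 */
	pthread_mutex_unlock(&lock);
	pthread_mutex_unlock(&lock);

	puts("recursive PI mutex ok");
	return 0;
}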

   thread 0 can still do mutex_lock/unlock:
   __lock = this thread's TID | FUTEX_WAITERS
   __owner = 0, but it should be this thread
   __counter keeps increasing, although there is no recursive mutex_lock call.

   thread 1 will be stuck.

Preliminary debugging shows that the content of __lock goes wrong first.
After a call to mutex_unlock, the value of __lock should no longer be
this thread's TID, but we observed that __lock still holds it after the
unlock, so other threads get stuck. Thread 0 can still lock the mutex
because of the recursive type, and __counter keeps increasing even
though mutex_unlock keeps failing due to the wrong value of __owner;
the application did not check the return value, so thread 0 looks fine,
but thread 1 will be stuck forever.
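
Had the application checked the unlock return value, the corruption
would have been visible immediately: with __owner wrong, glibc's unlock
fails with EPERM instead of releasing the lock. A minimal sketch of the
missing check follows; checked_unlock() is a hypothetical helper, and
the error-checking initializer is a glibc extension used here only to
make the failed unlock observable in a standalone program.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* glibc extension: an error-checking mutex reports EPERM on a bad
 * unlock instead of silently leaving the lock word in a bogus state */
static pthread_mutex_t m = PTHREAD_ERRORCHECK_MUTEX_INITIALIZER_NP;

static int checked_unlock(pthread_mutex_t *mtx)
{
	int err = pthread_mutex_unlock(mtx);

	if (err != 0)
		fprintf(stderr, "pthread_mutex_unlock: %s\n", strerror(err));
	return err;
}

int main(void)
{
	pthread_mutex_lock(&m);
	checked_unlock(&m);	/* ok: calling thread is the owner */
	checked_unlock(&m);	/* prints EPERM: we no longer own the mutex */
	return 0;
}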

