Date:	Wed, 16 Sep 2015 23:57:18 +0000
From:	Zhu Jefferry <Jefferry.Zhu@...escale.com>
To:	Thomas Gleixner <tglx@...utronix.de>
CC:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"bigeasy@...utronix.de" <bigeasy@...utronix.de>
Subject: RE: [PATCH v2] futex: lower the lock contention on the HB lock during
 wake up

> On Wed, 16 Sep 2015, Zhu Jefferry wrote:
> > > > The initial debugging shows that the content of __lock is wrong in
> > > > the first place. After a call to mutex_unlock, the value of __lock
> > > > should no longer be the calling thread's own TID. But we observed
> > > > that the value of __lock is still our own TID after the unlock, so
> > > > other threads get stuck.
> > >
> > > How did you observe that?
> >
> > I added an assert in mutex_unlock, after it finishes modifying __lock
> > either in user space or kernel space, just before it returns.
> 
> And that assert tells you that the kernel screwed up the futex value?
> No, it does not. It merely tells you that the value is not what you
> expect, but it does not tell you what caused that.
> 
> Hint: There are proper instrumentation tools, e.g. tracing, which tell
> you the exact flow of events and not just the observation after the fact.

I'm trying to get more details about the failure flow, but I'm told that
even a small timing change in the code can make the failure take much
longer to appear, or even disappear entirely.

> 
> > > > This thread can still lock the mutex because it is of the recursive
> > > > type, so __counter keeps increasing, while mutex_unlock keeps
> > > > failing because of the wrong value of __owner. But the application
> > > > did not check the return value, so thread 0 looks fine, while
> > > > thread 1 will be stuck forever.
> > >
> > > Oh well. So thread 0 looks all fine, despite not checking return
> > > values.
> > >
> >
> > Correct.
> 
> No. That's absolutely NOT correct. Not checking return values can cause
> all kinds of corruption. Return values are there for a reason.
> 

Besides the application not checking the return value, mutex_unlock in
libc does not check the return value from the kernel either.
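
For illustration, this is roughly what looking at the kernel's answer in
an unlock path could look like. A sketch of my own, again assuming a PI
futex; FUTEX_UNLOCK_PI and SYS_futex are the real kernel interface, the
surrounding code is illustrative only:

#include <errno.h>
#include <linux/futex.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Unlock a PI futex and actually inspect the kernel's verdict. */
static int unlock_pi_checked(unsigned int *futex_word)
{
        long ret = syscall(SYS_futex, futex_word, FUTEX_UNLOCK_PI,
                           0, NULL, NULL, 0);
        if (ret != 0) {
                /* EPERM here means the kernel does not consider us the
                 * owner -- exactly the case that goes unnoticed when
                 * the return value is dropped on the floor. */
                fprintf(stderr, "FUTEX_UNLOCK_PI failed: errno %d\n",
                        errno);
                return -errno;
        }
        return 0;
}

If the unlock path propagated, or at least logged, such a failure, the
stale owner would be visible immediately instead of thread 1 hanging
silently.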

