Date:	Thu, 23 Dec 2010 23:54:24 -0500
From:	Steven Rostedt <rostedt@...dmis.org>
To:	Gregory Haskins <ghaskins@...ell.com>
Cc:	linux-kernel@...r.kernel.org, Lai Jiangshan <laijs@...fujitsu.com>,
	Ingo Molnar <mingo@...e.hu>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Morreale <PMorreale@...ell.com>
Subject: Re: [RFC][RT][PATCH 3/4] rtmutex: Revert Optimize rt lock wakeup

On Thu, 2010-12-23 at 21:45 -0700, Gregory Haskins wrote:
> Hey Steve,
> 
> >>> On 12/23/2010 at 05:47 PM, in message <20101223225116.729981172@...dmis.org>,
> Steven Rostedt <rostedt@...dmis.org> wrote: 
> > From: Steven Rostedt <srostedt@...hat.com>
> > 
> > The commit: rtmutex: Optimize rt lock wakeup
> > 
> > Does not do what it was supposed to do.
> > This is because the adaptive waiter sets its state to TASK_(UN)INTERRUPTIBLE
> > before going into the loop. Thus, the test in wakeup_next_waiter()
> > will always fail on an adaptive waiter: it only checks whether the
> > pending waiter's state is TASK_RUNNING, and that is never the case
> > unless something else has already woken it up.
> > 
> > The smp_mb() added to make this test work is just as expensive as
> > the wakeup itself. And since the test never lets us skip the wakeup
> > anyway, we end up paying for both the smp_mb() and the wakeup.
> > 
> > I tested this with dbench, and we run faster without this patch.
> > I also tried a variant that instead fixed the loop, changing the state
> > only if the spinner was about to go to sleep, and that still did not
> > show any improvement.
> 
> Just a quick note to say I am a bit skeptical of this patch.  I know you
> are offline next week, so let's plan on hashing it out after the new year
> before I ack it.

Sure, but as I said, it is mostly broken anyway. I could even insert
some tracepoints to show that the skip is always missed (heck, I'll add
an unlikely() and run the branch profiler ;-)
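
Something along these lines, for instance (a hypothetical sketch, not
code from the tree; waiter_task is a made-up name for illustration):

	/*
	 * Wrap the skip-wakeup test in unlikely() so the branch
	 * profiler (CONFIG_PROFILE_ANNOTATED_BRANCHES) counts how
	 * often it actually fires.
	 */
	if (unlikely(waiter_task->state == TASK_RUNNING))
		return;		/* skip the wakeup -- I expect this never hits */

	wake_up_process(waiter_task);

The annotated-branch output in trace_stat/branch_annotated would then
show how often (if ever) that branch is taken.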

The reason is that adaptive spinners spin in some state other than
TASK_RUNNING, thus the check does not help adaptive spinners at all. I
first tried to fix that, but it made dbench run even slower. But I only
did a few tests, and only on a 4 CPU box, so it was a rather small
sample. The removal of the code had more to do with the fact that it
was already broken than with anything else.
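
To make that concrete, the shape of the problem is roughly this (a
simplified sketch, not the actual -rt code; waiter_task and the helper
names try_to_take_lock()/owner_running() are stand-ins):

	/* Waiter side: the adaptive spin happens in a sleeping state */
	set_current_state(TASK_UNINTERRUPTIBLE);
	while (!try_to_take_lock(lock)) {
		if (!owner_running(lock)) {
			schedule();	/* owner scheduled out: really sleep */
			set_current_state(TASK_UNINTERRUPTIBLE);
		}
		cpu_relax();		/* otherwise keep spinning */
	}
	__set_current_state(TASK_RUNNING);

	/* Wakeup side: the check being reverted */
	smp_mb();			/* pair with the waiter's state change */
	if (waiter_task->state != TASK_RUNNING)
		wake_up_process(waiter_task);	/* always taken for a spinner */

Since the spinner sits in TASK_UNINTERRUPTIBLE the whole time it spins,
the != TASK_RUNNING test is always true, the wakeup is never skipped,
and we just add the cost of the smp_mb() on top.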

But yeah, we can hash this out in the new year. This is one of the
reasons I only posted this patch set as an RFC.

> 
> Happy holidays!

You too!

-- Steve



Powered by blists - more mailing lists

Powered by Openwall GNU/*/Linux Powered by OpenVZ