Message-ID: <1385493911.25945.3.camel@buesod1.americas.hpqcorp.net>
Date: Tue, 26 Nov 2013 11:25:11 -0800
From: Davidlohr Bueso <davidlohr@...com>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Thomas Gleixner <tglx@...utronix.de>,
    LKML <linux-kernel@...r.kernel.org>,
    Jason Low <jason.low2@...com>, Ingo Molnar <mingo@...nel.org>,
    Darren Hart <dvhart@...ux.intel.com>,
    Mike Galbraith <efault@....de>, Jeff Mahoney <jeffm@...e.com>,
    Linus Torvalds <torvalds@...ux-foundation.org>,
    Scott Norton <scott.norton@...com>,
    Tom Vaden <tom.vaden@...com>,
    Aswin Chandramouleeswaran <aswin@...com>,
    Waiman Long <Waiman.Long@...com>,
    "Paul E. McKenney" <paulmck@...ux.vnet.ibm.com>
Subject: Re: [RFC patch 0/5] futex: Allow lockless empty check of hashbucket
    plist in futex_wake()

On Tue, 2013-11-26 at 09:52 +0100, Peter Zijlstra wrote:
> On Tue, Nov 26, 2013 at 12:12:31AM -0800, Davidlohr Bueso wrote:
>
> > I am becoming hesitant about this approach. The following are some
> > results, from my quad-core laptop, measuring the latency of nthread
> > wakeups (1 at a time). In addition, failed wait calls never occur -- so
> > we don't end up including the (otherwise minimal) overhead of the list
> > queue+dequeue; we are only measuring the smp_mb() cost, since the
> > !empty-list case never occurs.
> >
> > +---------+--------------------+--------+-------------------+--------+----------+
> > | threads | baseline time (ms) | stddev | patched time (ms) | stddev | overhead |
> > +---------+--------------------+--------+-------------------+--------+----------+
> > | 512 | 4.2410 | 0.9762 | 12.3660 | 5.1020 | +191.58% |
> > | 256 | 2.7750 | 0.3997 | 7.0220 | 2.9436 | +153.04% |
> > | 128 | 1.4910 | 0.4188 | 3.7430 | 0.8223 | +151.03% |
> > | 64 | 0.8970 | 0.3455 | 2.5570 | 0.3710 | +185.06% |
> > | 32 | 0.3620 | 0.2242 | 1.1300 | 0.4716 | +212.15% |
> > +---------+--------------------+--------+-------------------+--------+----------+
> >
>
> Whee, this is far more overhead than I would have expected... pretty
> impressive really for a simple mfence ;-)
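(For context: the mfence here is the smp_mb() the series adds so that
futex_wake() can do a lockless check of whether the hash bucket's plist is
empty before taking hb->lock. A rough userspace analogue of that ordering,
using C11 atomics -- a sketch of the idea with made-up names, not the actual
kernel code:

#include <stdatomic.h>
#include <stdbool.h>

/*
 * "condition" stands in for the user-space futex word, "nwaiters" for
 * the hash bucket's plist; both names are invented for this sketch.
 */
static atomic_bool condition;
static atomic_int nwaiters;

static void waiter(void)
{
    atomic_fetch_add(&nwaiters, 1);    /* "enqueue"; seq_cst RMW, full barrier */
    if (!atomic_load(&condition)) {
        /* would block here, i.e. FUTEX_WAIT */
    }
    atomic_fetch_sub(&nwaiters, 1);    /* "dequeue" */
}

static void waker(void)
{
    atomic_store(&condition, true);            /* make the wakeup condition true */
    atomic_thread_fence(memory_order_seq_cst); /* the smp_mb()/mfence in question */
    if (atomic_load(&nwaiters) == 0)
        return;    /* lockless "plist is empty" fast path: skip the lock */
    /* otherwise take the bucket lock and wake waiters */
}

Without the fence, the waker's empty check could be satisfied before its
store to the condition becomes visible, and a concurrent waiter that already
saw the old condition could enqueue itself and sleep forever. The table
above is measuring what that fence costs when the list is, in fact, always
empty.)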
*sigh* I just realized I had some extra debugging options enabled in the
.config I ran for the patched kernel. That probably explains the huge
overhead. I'll rerun and report shortly.
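
For reference, the measurement described above -- nthreads blocked in
FUTEX_WAIT, then timing nthreads FUTEX_WAKE calls of one task each -- could
look roughly like the sketch below. This is a hypothetical reconstruction
(the names, thread count, and readiness handshake are all made up), not the
benchmark that produced the numbers in the table:

#define _GNU_SOURCE
#include <linux/futex.h>
#include <sys/syscall.h>
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

#define NTHREADS 128            /* one of the nthreads values above */

static int futex_word;          /* every waiter blocks on this one word */
static atomic_int nready;

static long sys_futex(int *uaddr, int op, int val)
{
    return syscall(SYS_futex, uaddr, op, val, NULL, NULL, 0);
}

static void *waiter(void *unused)
{
    atomic_fetch_add(&nready, 1);
    /*
     * Block while the word is still 0. With the handshake below, all
     * waiters get here before the word changes, so the wait does not
     * fail -- matching "failed wait calls never occur" above.
     */
    sys_futex(&futex_word, FUTEX_WAIT, 0);
    return NULL;
}

int main(void)
{
    pthread_t tid[NTHREADS];
    struct timespec t0, t1;
    int i;

    for (i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, waiter, NULL);
    while (atomic_load(&nready) < NTHREADS)
        usleep(1000);
    usleep(10000);  /* crude: let everyone actually reach FUTEX_WAIT */

    futex_word = 1; /* safety net: a straggler's FUTEX_WAIT returns at once */
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (i = 0; i < NTHREADS; i++)
        sys_futex(&futex_word, FUTEX_WAKE, 1);  /* 1 wakeup at a time */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    for (i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);

    printf("%d wakeups took %.4f ms\n", NTHREADS,
           (t1.tv_sec - t0.tv_sec) * 1e3 +
           (t1.tv_nsec - t0.tv_nsec) / 1e6);
    return 0;
}

Run once per nthreads value and averaged over many iterations, a loop of
this shape yields latencies and stddevs like those in the table above.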