[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <20091118105246.GA19027@wotan.suse.de>
Date: Wed, 18 Nov 2009 11:52:46 +0100
From: Nick Piggin <npiggin@...e.de>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: john stultz <johnstul@...ibm.com>, Ingo Molnar <mingo@...e.hu>,
Darren Hart <dvhltc@...ibm.com>,
Clark Williams <williams@...hat.com>,
"Paul E. McKenney" <paulmck@...ibm.com>,
Dinakar Guniguntala <dino@...ibm.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: -rt dbench scalabiltiy issue
On Wed, Nov 18, 2009 at 11:19:14AM +0100, Thomas Gleixner wrote:
> Nick,
>
> On Wed, 18 Nov 2009, Nick Piggin wrote:
> > > So yes, on -rt, the overhead from lock contention is way way worse then
> > > any extra atomic ops. :)
> >
> > How about overhead for an uncontended lock? Ie. is the problem caused
> > because lock *contention* issues are magnified on -rt, or is it
> > because uncontended lock overheads are higher? Detailed callgraph
> > profiles and lockstat of +/-atomic case would be very interesting.
>
> In the uncontended case we have the overhead of calling might_sleep()
> before we acquire the lock with cmpxchg(). The uncontended unlock is a
> cmpxchg() as well.
OK well then you don't reduce atomic ops in the lookup/dput fastpaths
by protecting d_count with d_lock, so single threaded performance should
not hurt by using atomic_t here.
I'll keep this in mind. As I said, I still need to do some more work on
the fast path lookup and other single threaded performance. In the worst
case that mainline really doesn't like atomic_t there it probably isn't
hard to make some small wrappers for -rt.
> I don't think that this is significant overhead and we see real lock
> contention issues magnified by at least an order of magnitude.
Yeah I'm sure you're right. I'm just interested where it is coming from
in -rt.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Powered by blists - more mailing lists