Message-Id: <1255743456.5135.162.camel@localhost.localdomain>
Date: Fri, 16 Oct 2009 18:37:36 -0700
From: john stultz <johnstul@...ibm.com>
To: paulmck@...ux.vnet.ibm.com
Cc: Ingo Molnar <mingo@...e.hu>, Thomas Gleixner <tglx@...utronix.de>,
Nick Piggin <npiggin@...e.de>, Darren Hart <dvhltc@...ibm.com>,
Clark Williams <williams@...hat.com>,
Dinakar Guniguntala <dino@...ibm.com>,
lkml <linux-kernel@...r.kernel.org>
Subject: Re: -rt dbench scalability issue
On Fri, 2009-10-16 at 18:03 -0700, john stultz wrote:
> On Fri, 2009-10-16 at 17:45 -0700, Paul E. McKenney wrote:
> > On Fri, Oct 16, 2009 at 01:05:19PM -0700, john stultz wrote:
> > > See http://lwn.net/Articles/354690/ for a bit of background here.
> > >
> > > I've been looking at scalability regressions in the -rt kernel. One easy
> > > place to see regressions is with the dbench benchmark. While dbench can
> > > be painfully noisy from run to run, it does clearly show some severe
> > > regressions with -rt.
> > >
> > > There's a chart in the article above that illustrates this, but here's
> > > some specific numbers on an 8-way box running dbench-3.04 as follows:
> > >
> > > ./dbench 8 -t 10 -D . -c client.txt 2>&1
> > >
> > > I ran both on an ext3 disk and a ramfs mounted directory.
> > >
> > > (Again, the numbers are VERY rough due to the run-to-run variance seen)
> > >
> > >                        ext3           ramfs
> > > 2.6.32-rc3:            ~1800 MB/sec   ~1600 MB/sec
> > > 2.6.31.2-rt13:         ~300 MB/sec    ~66 MB/sec
> > >
> > > Ouch. Similar to the charts in the LWN article.
> > >
> > > Dino pointed out that using lockstat with -rt, we can see the
> > > dcache_lock is fairly hot with the -rt kernel. One of the issues with
> > > the -rt tree is that the change from spinlocks to sleeping spinlocks
> > > doesn't affect the uncontended case very much, but when there is
> > > contention on the lock, the overhead is much worse than with vanilla.
> > >
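> > > To make that concrete, here's a userspace analogy (a rough sketch,
> > > not kernel code): a pthread spinlock stands in for a vanilla
> > > spinlock, and a pthread mutex, whose contended path blocks in the
> > > kernel via futex much like the rtmutex-based locks in -rt, stands
> > > in for the sleeping spinlock:
> > >
> > > /* spin_vs_sleep.c: build with gcc -O2 -pthread spin_vs_sleep.c */
> > > #include <pthread.h>
> > > #include <stdio.h>
> > > #include <time.h>
> > >
> > > #define THREADS 8
> > > #define ITERS   1000000L
> > >
> > > static pthread_spinlock_t slock;
> > > static pthread_mutex_t mlock = PTHREAD_MUTEX_INITIALIZER;
> > > static volatile long counter;
> > >
> > > static void *spin_worker(void *arg)
> > > {
> > > 	(void)arg;
> > > 	for (long i = 0; i < ITERS; i++) {
> > > 		pthread_spin_lock(&slock);	/* busy-waits when contended */
> > > 		counter++;			/* short critical section */
> > > 		pthread_spin_unlock(&slock);
> > > 	}
> > > 	return NULL;
> > > }
> > >
> > > static void *mutex_worker(void *arg)
> > > {
> > > 	(void)arg;
> > > 	for (long i = 0; i < ITERS; i++) {
> > > 		pthread_mutex_lock(&mlock);	/* blocks when contended */
> > > 		counter++;
> > > 		pthread_mutex_unlock(&mlock);
> > > 	}
> > > 	return NULL;
> > > }
> > >
> > > static double run(void *(*fn)(void *))
> > > {
> > > 	pthread_t tids[THREADS];
> > > 	struct timespec a, b;
> > >
> > > 	counter = 0;
> > > 	clock_gettime(CLOCK_MONOTONIC, &a);
> > > 	for (int i = 0; i < THREADS; i++)
> > > 		pthread_create(&tids[i], NULL, fn, NULL);
> > > 	for (int i = 0; i < THREADS; i++)
> > > 		pthread_join(tids[i], NULL);
> > > 	clock_gettime(CLOCK_MONOTONIC, &b);
> > > 	return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
> > > }
> > >
> > > int main(void)
> > > {
> > > 	pthread_spin_init(&slock, PTHREAD_PROCESS_PRIVATE);
> > > 	printf("spinlock: %.2fs\n", run(spin_worker));
> > > 	printf("mutex:    %.2fs\n", run(mutex_worker));
> > > 	return 0;
> > > }
> > >
> > > On an 8-way I'd expect the mutex case to come out dramatically
> > > slower for a critical section this short, which is the same shape of
> > > overhead the -rt locks add once a lock gets contended.
> > >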
> > > And as noted at the realtime mini-conf, Ingo saw this dcache_lock
> > > bottleneck as well and suggested trying Nick Piggin's dcache_lock
> > > removal patches.
> > >
> > > So over the last week, I've ported Nick's fs-scale patches to -rt.
> > >
> > > Specifically the tarball found here:
> > > ftp://ftp.kernel.org/pub/linux/kernel/people/npiggin/patches/fs-scale/06102009.tar.gz
> > >
> > >
> > > Due to the 2.6.32 vs 2.6.31-rt split, the port wasn't exactly
> > > straightforward, but I believe I managed to do a decent job. Once I
> > > had the patchset applied, built, and booted, I eagerly ran dbench to
> > > see the new results, aaaaaand.....
> > >
> > >                        ext3           ramfs
> > > 2.6.31.2-rt13-nick:    ~80 MB/sec     ~126 MB/sec
> > >
> > >
> > > So yeah, a mixed bag there. The ramfs numbers got a little better,
> > > but not by much, and the ext3 numbers regressed further.
> >
> > OK, I will ask the stupid question... What happens if you run on ext2?
>
> Yep, that was next on my list. Basically it's faster, but the
> regressions are similar percentage-wise with each patchset.
>
>                        ext3           ext2
> 2.6.32-rc3:            ~1800 MB/sec   ~2900 MB/sec
> 2.6.31.2-rt13:         ~300 MB/sec    ~600 MB/sec
> 2.6.31.2-rt13-nick:    ~80 MB/sec     ~130 MB/sec
Additionally, looking at the perf data, it does seem the dcache_lock is
the contention point with ext2 on -rt13, but with Nick's patch the
contention still sits mostly in the dput/path_get functions. So it
seems the contention has just been moved rather than eased with _my
port_ of Nick's patch (emphasis on "my port", since with Nick's patch
against mainline there is no regression at all.. I don't want to drag
Nick's patches through the mud here :)
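
As a guess at the mechanism (my assumption, not something the perf data
proves), even with dcache_lock split up, every path lookup still gets
and puts a reference on the same few parent dentries, so the cacheline
bouncing concentrates in dput/path_get. A minimal userspace sketch of
that effect, with every thread hammering one shared atomic count the
way lookups hammer a common dentry:

/* refcount_hot.c: build with gcc -O2 -pthread refcount_hot.c */
#include <stdatomic.h>
#include <pthread.h>
#include <stdio.h>

#define THREADS 8
#define ITERS   10000000L

/* stand-in for the reference count of one hot, shared dentry */
static atomic_long d_count;

static void *worker(void *arg)
{
	(void)arg;
	for (long i = 0; i < ITERS; i++) {
		atomic_fetch_add(&d_count, 1);	/* path_get()-like get */
		atomic_fetch_sub(&d_count, 1);	/* dput()-like put */
	}
	return NULL;
}

int main(void)
{
	pthread_t tids[THREADS];

	for (int i = 0; i < THREADS; i++)
		pthread_create(&tids[i], NULL, worker, NULL);
	for (int i = 0; i < THREADS; i++)
		pthread_join(tids[i], NULL);
	printf("final count: %ld\n", atomic_load(&d_count));
	return 0;
}

Whether that's really what my port is hitting I can't say yet, but it's
the shape of bottleneck that would match the profile.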
thanks
-john