Message-Id: <1235494732.26788.256.camel@nimitz>
Date: Tue, 24 Feb 2009 08:58:52 -0800
From: Dave Hansen <dave@...ux.vnet.ibm.com>
To: Nick Piggin <nickpiggin@...oo.com.au>
Cc: Salman Qazi <sqazi@...gle.com>, linux-kernel@...r.kernel.org,
Ingo Molnar <mingo@...e.hu>,
Thomas Gleixner <tglx@...utronix.de>,
"H. Peter Anvin" <hpa@...or.com>, Andi Kleen <andi@...stfloor.org>,
Dave Hansen <haveblue@...ibm.com>,
Linus Torvalds <torvalds@...ux-foundation.org>
Subject: Re: Another Performance Regression in write() syscall

On Tue, 2009-02-24 at 19:47 +1100, Nick Piggin wrote:
> On Tuesday 24 February 2009 17:25:45 Dave Hansen wrote:
> > On Mon, 2009-02-23 at 22:05 -0800, Salman Qazi wrote:
> > > Analysis of profile data has led us to believe that commit
> > > 3d733633a633065729c9e4e254b2e5442c00ef7e has caused a performance
> > > regression. This commit tracks writers so that read-only bind mounts
> > > function correctly.
> > >
> > > We can verify this regression by applying the following patch to
> > > partially disable the above-mentioned commit and then running the fstime
> > > component of Unixbench. The settings used were 256-byte writes with a
> > > MAX_BLOCK of 2000.
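
For reference, a minimal user-space sketch of that workload (repeated
256-byte write() calls, MAX_BLOCK of 2000 per pass) might look like the
following. The file name, pass count, and timing are my assumptions, not
the actual Unixbench fstime source:

/*
 * Sketch of an fstime-style write workload: 256-byte writes,
 * MAX_BLOCK blocks per pass, timed with clock_gettime().
 */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <time.h>
#include <unistd.h>

#define BLOCK_SIZE 256
#define MAX_BLOCK  2000
#define PASSES     100

int main(void)
{
        char buf[BLOCK_SIZE];
        struct timespec start, end;
        int fd, pass, i;

        memset(buf, 'x', sizeof(buf));

        fd = open("fstime.tmp", O_CREAT | O_TRUNC | O_WRONLY, 0644);
        if (fd < 0) {
                perror("open");
                return 1;
        }

        clock_gettime(CLOCK_MONOTONIC, &start);
        for (pass = 0; pass < PASSES; pass++) {
                lseek(fd, 0, SEEK_SET);
                for (i = 0; i < MAX_BLOCK; i++) {
                        if (write(fd, buf, BLOCK_SIZE) != BLOCK_SIZE) {
                                perror("write");
                                return 1;
                        }
                }
        }
        clock_gettime(CLOCK_MONOTONIC, &end);

        printf("%.3f seconds for %d writes of %d bytes\n",
               (end.tv_sec - start.tv_sec) +
               (end.tv_nsec - start.tv_nsec) / 1e9,
               PASSES * MAX_BLOCK, BLOCK_SIZE);

        unlink("fstime.tmp");
        close(fd);
        return 0;
}

Something like "gcc -O2 fstime_sketch.c" should build it; older glibc
needs -lrt for clock_gettime().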
> >
> > I'm a bit surprised that write() is what is regressing. Unless I
> > screwed up, we do all the expensive accounting at open()/close() time.
> > Is this a test that gets run in parallel on multiple cpus?
>
> Don't forget touch_atime...

Yeah, that's a good point.  Are we sure that's what is happening here,
though? That's one thing a profile would hopefully help with.
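
The reason atime/mtime updates matter for write() itself is that the
timestamp update brackets itself with mnt_want_write()/mnt_drop_write()
on the file's vfsmount, which is exactly the writer accounting that
commit added. Roughly, as a from-memory sketch of the pattern rather
than the literal kernel code:

/*
 * Sketch of the pattern only, not copied from fs/inode.c: every write
 * that dirties the inode timestamps takes and drops a per-mount write
 * reference so the mount can't be flipped read-only underneath an
 * active writer.
 */
static void sketch_update_time(struct file *file)
{
        struct vfsmount *mnt = file->f_path.mnt;

        if (mnt_want_write(mnt))        /* bumps the writer count */
                return;                 /* mount is read-only */

        /* ... update i_mtime/i_ctime and mark the inode dirty ... */

        mnt_drop_write(mnt);            /* drops the writer count */
}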
> Still, open/close isn't unimportant either.

Yeah, that's true.  But what I noticed was that all of the other
open/close activity masked out any overhead from mnt_want/drop_write(),
since a big chunk of the overhead was simply pulling the vfsmount data
into the CPU cache.
> > Could you take a look at Nick's patches to speed this stuff up?
> >
> > http://thread.gmane.org/gmane.linux.file-systems/28186
> >
> > We may need to dust those off, although I'm still a bit worried about
> > the complexities of open-coding all the barriers.
>
> I really need to do something about trying to push them upstream again,
> actually, because we've got them in the SLES11 tree.

Were the patches that you integrated any different from the ones you
posted a few months ago?
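
For anyone following along, the basic idea in those patches, as I
understand them, is to make the mnt_want_write() fast path touch only a
per-CPU counter and to push the expense onto the rare
remount-read-only path, which has to sum all of the counters. Here is
a toy user-space illustration of that trade-off; the names, the one
slot per "CPU", and the sequentially consistent atomics are mine, not
Nick's code:

/*
 * Toy illustration: per-"CPU" writer counts mean the hot path only
 * touches its own counter, while the rare remount-read-only path pays
 * for summing all of them.
 */
#include <stdatomic.h>
#include <stdio.h>

#define NR_CPUS 4

static atomic_long mnt_writers[NR_CPUS];        /* one counter per "CPU" */
static atomic_bool mnt_readonly;

/* Fast path: bump the local counter, then re-check the ro flag. */
static int want_write(int cpu)
{
        atomic_fetch_add(&mnt_writers[cpu], 1);
        if (atomic_load(&mnt_readonly)) {
                atomic_fetch_sub(&mnt_writers[cpu], 1);
                return -1;                      /* roughly -EROFS */
        }
        return 0;
}

static void drop_write(int cpu)
{
        atomic_fetch_sub(&mnt_writers[cpu], 1);
}

/* Slow path: set the ro flag, then look for any active writers. */
static int make_readonly(void)
{
        long total = 0;

        atomic_store(&mnt_readonly, 1);
        for (int i = 0; i < NR_CPUS; i++)
                total += atomic_load(&mnt_writers[i]);
        if (total) {
                atomic_store(&mnt_readonly, 0); /* writers active, back off */
                return -1;                      /* roughly -EBUSY */
        }
        return 0;
}

int main(void)
{
        if (want_write(0) == 0) {
                printf("remount ro with a writer active: %d\n", make_readonly());
                drop_write(0);
        }
        printf("remount ro with no writers: %d\n", make_readonly());
        return 0;
}

With seq_cst atomics the increment-then-check versus flag-then-sum
ordering comes for free; real per-CPU counters have to get the same
ordering right with open-coded barriers, which I take to be the
complexity worried about above.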
-- Dave