Message-ID: <20110508051113.GH2934@cucamonga.audible.transient.net>
Date: Sun, 8 May 2011 05:11:13 +0000
From: Jamie Heilman <jamie@...ible.transient.net>
To: Dave Chinner <david@...morbit.com>
Cc: linux-kernel@...r.kernel.org,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
Bruno Prémont <bonbons@...ux-vserver.org>,
xfs-masters@....sgi.com, xfs@....sgi.com,
Christoph Hellwig <hch@...radead.org>,
Alex Elder <aelder@....com>, Dave Chinner <dchinner@...hat.com>
Subject: Re: 2.6.39-rc3, 2.6.39-rc4: XFS lockup - regression since 2.6.38
Dave Chinner wrote:
> On Thu, May 05, 2011 at 12:26:13PM +1000, Dave Chinner wrote:
> > On Thu, May 05, 2011 at 10:21:26AM +1000, Dave Chinner wrote:
> > > On Wed, May 04, 2011 at 12:57:36AM +0000, Jamie Heilman wrote:
> > > > Dave Chinner wrote:
> > > > > OK, so the common elements here appears to be root filesystems
> > > > > with small log sizes, which means they are tail pushing all the
> > > > > time metadata operations are in progress. Definitely seems like a
> > > > > race in the AIL workqueue trigger mechanism. I'll see if I can
> > > > > reproduce this and cook up a patch to fix it.
> > > >
> > > > Is there value in continuing to post sysrq-w, sysrq-l, xfs_info, and
> > > > other assorted feedback wrt this issue? I've had it happen twice now
> > > > myself in the past week or so, though I have no reliable reproduction
> > > > technique. Just wondering if more data points will help isolate the
> > > > cause, and if so, how to be prepared to get them.
> > > >
> > > > For whatever it's worth, my last lockup was while running
> > > > 2.6.39-rc5-00127-g1be6a1f with a preempt config without cgroups.
> > >
> > > Can you all try the patch below? I've managed to trigger a couple of
> > > xlog_wait() lockups in some controlled load tests. The lockups don't
> > > appear to occur with the following patch, which fixes the race
> > > condition in the AIL workqueue trigger.
> >
> > They are still there, just harder to hit.
> >
> > FWIW, I've also discovered that "echo 2 > /proc/sys/vm/drop_caches"
> > gets the system moving again because that changes the push target.
> >
> > I've found two more bugs, and my test case now reliably reproduces
> > a 5-10s pause at ~1M created 1-byte files and then a hang at about
> > 1.25M files. So there's yet another problem lurking that I need to
> > get to the bottom of.
>
> Which, of course, was the real regression. The patch below has
> survived a couple of hours of testing, which fixes all 4 of the
> problems I found. Please test.
Well, 61 hours in now, and no lockups. I've written ~204GiB to my xfs
volumes in that time, much of it Audacity temp files of 1037kB each,
so not as metadata intensive as your test case, but it's more or less
what I'd been doing in the past when the lockups happened. Looks
pretty promising at this point.
--
Jamie Heilman http://audible.transient.net/~jamie/