Message-ID: <20110509075709.3c527fd2@pluto.restena.lu>
Date: Mon, 9 May 2011 07:57:09 +0200
From: Bruno Prémont <bonbons@...ux-vserver.org>
To: Dave Chinner <david@...morbit.com>
Cc: linux-kernel@...r.kernel.org,
Markus Trippelsdorf <markus@...ppelsdorf.de>,
xfs-masters@....sgi.com, xfs@....sgi.com,
Christoph Hellwig <hch@...radead.org>,
Alex Elder <aelder@....com>, Dave Chinner <dchinner@...hat.com>
Subject: Re: 2.6.39-rc3, 2.6.39-rc4: XFS lockup - regression since 2.6.38
On Thu, 5 May 2011 22:35:13 Bruno Prémont wrote:
> On Thu, 05 May 2011 Dave Chinner wrote:
> > On Thu, May 05, 2011 at 12:26:13PM +1000, Dave Chinner wrote:
> > > On Thu, May 05, 2011 at 10:21:26AM +1000, Dave Chinner wrote:
> > > > On Wed, May 04, 2011 at 12:57:36AM +0000, Jamie Heilman wrote:
> > > > > Dave Chinner wrote:
> > > > > > OK, so the common element here appears to be root filesystems
> > > > > > with small log sizes, which means they are tail pushing the
> > > > > > whole time metadata operations are in progress. Definitely seems like a
> > > > > > race in the AIL workqueue trigger mechanism. I'll see if I can
> > > > > > reproduce this and cook up a patch to fix it.
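To make that concrete, here is a minimal userspace sketch of the kind
of trigger race being described: an "is work already queued?" check
racing with the worker going idle. All names in it (push_target,
work_queued, queue_work, do_push) are hypothetical, and it is not the
actual xfsaild code, just an illustration of the failure class.

/*
 * Illustrative sketch only; not the actual XFS AIL code.
 */
#include <stdbool.h>
#include <pthread.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long push_target;   /* how far the worker should push the tail */
static bool work_queued;   /* is a worker pass already scheduled? */

/*
 * Racy trigger: if the worker is just about to clear work_queued, we
 * skip queueing, the worker exits without seeing the new target, and
 * nothing ever pushes the tail again; writers then sleep forever
 * waiting for log space (an xlog_wait()-style hang).
 */
void schedule_push_racy(long target, void (*queue_work)(void))
{
    push_target = target;
    if (!work_queued) {
        work_queued = true;
        queue_work();
    }
}

/* Fixed trigger: update the target and test-and-set the flag under
 * the lock, so the check cannot race with the worker's clear. */
void schedule_push_fixed(long target, void (*queue_work)(void))
{
    bool need;

    pthread_mutex_lock(&lock);
    push_target = target;
    need = !work_queued;
    work_queued = true;
    pthread_mutex_unlock(&lock);
    if (need)
        queue_work();
}

/* Fixed worker: before going idle, recheck the target under the lock
 * and requeue itself if the target moved while this pass was running. */
void push_worker_fixed(void (*do_push)(long), void (*queue_work)(void))
{
    long t = push_target;

    do_push(t);
    pthread_mutex_lock(&lock);
    if (push_target != t) {
        pthread_mutex_unlock(&lock);
        queue_work();       /* target moved: keep pushing */
        return;
    }
    work_queued = false;    /* safe: no newer target pending */
    pthread_mutex_unlock(&lock);
}

The fix pattern is the usual one for lost wakeups: make the trigger's
test-and-set atomic, and have the worker recheck for a moved target
before it declares itself idle.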
> > > > >
> > > > > Is there value in continuing to post sysrq-w, sysrq-l, xfs_info, and
> > > > > other assorted feedback wrt this issue? I've had it happen twice now
> > > > > myself in the past week or so, though I have no reliable reproduction
> > > > > technique. Just wondering if more data points will help isolate the
> > > > > cause, and if so, how to be prepared to get them.
> > > > >
> > > > > For whatever it's worth, my last lockup was while running
> > > > > 2.6.39-rc5-00127-g1be6a1f with a preempt config without cgroups.
> > > >
> > > > Can you all try the patch below? I've managed to trigger a couple of
> > > > xlog_wait() lockups in some controlled load tests. The lockups don't
> > > > appear to occur with the following patch, which closes the race
> > > > condition in the AIL workqueue trigger.
> > >
> > > They are still there, just harder to hit.
> > >
> > > FWIW, I've also discovered that "echo 2 > /proc/sys/vm/drop_caches"
> > > gets the system moving again because that changes the push target.
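For reference, the same nudge can be issued from a program; a minimal
sketch, assuming root (the helper name is made up):

/*
 * Equivalent of `echo 2 > /proc/sys/vm/drop_caches`. Value 2 asks the
 * kernel to reclaim slab objects such as dentries and inodes, which
 * as a side effect moves the stuck push target.
 */
#include <stdio.h>

int drop_slab_caches(void)
{
    FILE *f = fopen("/proc/sys/vm/drop_caches", "w");

    if (!f)
        return -1;          /* not root, or /proc not mounted */
    fputs("2\n", f);
    fclose(f);
    return 0;
}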
> > >
> > > I've found two more bugs, and my test case now reliably reproduces
> > > a 5-10s pause at ~1M created 1-byte files followed by a hang at
> > > about 1.25M files. So there's yet another problem lurking that I
> > > need to get to the bottom of.
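A rough stand-in for that kind of load, as a sketch only (the
file-name scheme and counts are guesses, not the actual test harness):

/*
 * Create ~1.25M 1-byte files in the current directory to generate
 * sustained small-file metadata load against a small XFS log.
 */
#include <stdio.h>
#include <fcntl.h>
#include <unistd.h>
#include <sys/stat.h>

int main(void)
{
    char path[32];
    long i;

    for (i = 0; i < 1250000; i++) {
        int fd;

        snprintf(path, sizeof(path), "f%07ld", i);
        fd = open(path, O_CREAT | O_WRONLY | O_EXCL, 0644);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        if (write(fd, "x", 1) != 1) {
            perror("write");
            close(fd);
            return 1;
        }
        close(fd);
        if (i && i % 100000 == 0)
            fprintf(stderr, "created %ld files\n", i);
    }
    return 0;
}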
> >
> > Which, of course, was the real regression. The patch below, which
> > fixes all 4 of the problems I found, has survived a couple of hours
> > of testing. Please test.
>
> It successfully survived my 2-hour session today. I will continue testing
> during the week-end and see if it also survives the longer whole-day
> sessions.
>
> I will report results at the end of the week-end (or earlier in case of
> trouble).
The patch also survived the whole week-end (at least two 10-hour
sessions) of normal desktop work as well as a few hours of software
compilation. (Without the patch the system would probably have frozen
at least twice a day.)
So it looks really good!
Thanks,
Bruno