Message-ID: <20101028195451.GB28126@thunk.org>
Date: Thu, 28 Oct 2010 15:54:51 -0400
From: Ted Ts'o <tytso@....edu>
To: Eric Sandeen <sandeen@...hat.com>
Cc: Markus Trippelsdorf <markus@...ppelsdorf.de>,
sedat.dilek@...il.com, LKML <linux-kernel@...r.kernel.org>,
linux-ext4@...r.kernel.org, sfr@...b.auug.org.au,
Arnd Bergmann <arnd@...db.de>,
Avinash Kurup <kurup.avinash@...il.com>
Subject: Re: [next-20101038] Call trace in ext4
On Thu, Oct 28, 2010 at 02:36:08PM -0500, Eric Sandeen wrote:
> Ted Ts'o wrote:
> > On Thu, Oct 28, 2010 at 02:01:18PM -0400, Ted Ts'o wrote:
> >> On Thu, Oct 28, 2010 at 07:52:21PM +0200, Markus Trippelsdorf wrote:
> >>> The same BUG (inode.c:2721) happened here today running latest vanilla
> >>> git. There is nothing in my logs unfortunately, but I shot a photo of
> >>> the trace (see attachment).
> >> I see, it's the page_buffers() call which is triggering. Looking into
> >> it...
> >
> > Can folks let me know if this fixes the problem?
>
> Ted, any idea what caused the change in behavior here?
The bug was caused by commit a42afc5f56 ("ext4: simplify ext4_writepage()").
I somehow managed to use page_buffers(page) instead of
page_has_buffers(page) when cleaning up ext4_writepage(). It's not
something I can trigger in xfstests, so creating a test case that can
trigger this issue is on my todo list.
The immediate trigger was journal_submit_inode_data_buffers() getting
called in data=ordered mode, which ends up calling
generic_writepages() which iterates over all of the dirty pages in the
inode and calls ext4_writepage() on them. If we're under enough
memory pressure that the buffer heads get stripped from the page
before the journal commit happens (by default on a 5 second interval),
then we'll end up calling page_buffers() on a page whose buffer heads
have been stripped, and because I had somehow changed
page_has_buffers() to page_buffers(), that triggers the BUG_ON().
My standard test setup runs xfstests using 768MB of memory on a
dual-CPU system, and apparently fsstress wasn't enough to trigger the
case where the bh's get stripped from the page, even with a relatively
small memory configuration. That surprises me, but one good thing
about this bug is that it has pointed out a gap in my testing
strategy.
To address this, we need either (a) tests that generate enough memory
pressure that this happens, or (b) some hooks (maybe some magic
ioctl's) that emulate it by forcibly detaching bh's from some random
set of pages.
- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/