Message-ID: <20090527150459.GL11363@kernel.dk>
Date: Wed, 27 May 2009 17:05:00 +0200
From: Jens Axboe <jens.axboe@...cle.com>
To: Theodore Tso <tytso@....edu>
Cc: linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
chris.mason@...cle.com, david@...morbit.com, hch@...radead.org,
akpm@...ux-foundation.org, jack@...e.cz,
yanmin_zhang@...ux.intel.com, richard@....demon.co.uk,
damien.wyart@...e.fr
Subject: Re: [PATCH 0/11] Per-bdi writeback flusher threads v8
On Wed, May 27 2009, Theodore Tso wrote:
> Hi Jens,
>
> FYI, just for yucks I tried to merge your bdi patches and the ext4
> patch queue to make sure there were no patch conflicts, and started a
> quick regression run. I got the following soft lockup report when
> running "fsstress -d /mnt/tmp -s 12345 -S -p 20 -n 1000".
>
> I'll retry the test with your stock writeback-v8 git branch, without
> the ext4 patches planned for the next merge window, to see if I get
> the same soft lockup, but I thought I should give you an early heads up.
From a quick glance, it looks like it's the writeback patches. Did it
get stuck, or did it still make progress? Thanks for testing; please let
me know if it triggers with just wb-v8. Can you send me your .config,
too? I'm assuming no special ext4 mount options, right? I used ext4 as
well for testing, with -o barrier=0 for most of the runs.

Regardless, I'll try this test, poke at it a bit, and find out what the
issue is.
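[Editor's note: a minimal sketch of the reproduction steps discussed above, combining Ted's fsstress invocation with the barrier=0 mount option Jens mentions. The device and mount-point paths are illustrative assumptions, not from the mail, and the script only prints the commands (mkfs/mount need root and a scratch device):]

```shell
#!/bin/sh
# Dry-run sketch: echo the reproduction steps rather than executing them.
# $1 = scratch device (assumed, e.g. /dev/sdb1), $2 = mount point from the report.
repro() {
  echo "mkfs.ext4 -F $1"                               # fresh ext4 filesystem
  echo "mount -o barrier=0 $1 $2"                      # barrier=0 per Jens's runs
  echo "fsstress -d $2 -s 12345 -S -p 20 -n 1000"      # seed/options from Ted's report
  echo "umount $2"
}
repro /dev/sdb1 /mnt/tmp
```

Run as root against a disposable scratch partition if executing the printed commands for real; fsstress ships with the xfstests suite.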
--
Jens Axboe
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/