Message-ID: <20110602095424.GA5718@quack.suse.cz>
Date: Thu, 2 Jun 2011 11:54:24 +0200
From: Jan Kara <jack@...e.cz>
To: Ted Ts'o <tytso@....edu>
Cc: Jan Kara <jack@...e.cz>, Manish Katiyar <mkatiyar@...il.com>,
linux-ext4@...r.kernel.org, mfasheh@...e.com, jlbec@...lplan.org
Subject: Re: [PATCH v2 2/3] jbd2: Add extra parameter in
start_this_handle() to control allocation flags.
On Tue 31-05-11 18:27:20, Ted Tso wrote:
> On Tue, May 31, 2011 at 01:22:53PM +0200, Jan Kara wrote:
> >
> > The problem is that with ext4, we need i_mutex in io completion path to
> > end page writeback. So we cannot do GFP_KERNEL allocation whenever we hold
> > i_mutex because mm might wait in direct reclaim for IO to complete and that
> > cannot happen until we release i_mutex.
>
> OK, maybe I'm being dense, but I'm not seeing it. I see where we need
> i_mutex on the ext4_da_writepages() codepath, but that's never used
> for direct reclaim. Direct reclaim only calls ext4_writepage(), and
> that doesn't seem to try to grab i_mutex as near as I can tell. Am I
> missing something?
What happens is that direct reclaim sometimes does
wait_on_page_writeback() (e.g. in shrink_page_list()), or it explicitly waits
for the NR_WRITEBACK statistic to drop below some threshold
(throttle_vm_writeout()). That can deadlock if we hold i_mutex while doing
this, because we may need i_mutex to actually move the page out of the
PageWriteback state...
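To make the chain concrete, here is a rough sketch (illustrative only; the
kmalloc() stands in for whatever allocation start_this_handle() and its
callers do while i_mutex is held):

	/*
	 * Deadlock-prone pattern: a GFP_KERNEL allocation may enter direct
	 * reclaim, and reclaim may wait for a page under writeback - but
	 * ending that writeback (ext4's IO completion path) needs i_mutex,
	 * which we hold.
	 */
	mutex_lock(&inode->i_mutex);
	buf = kmalloc(size, GFP_KERNEL);	/* reclaim may call
						 * wait_on_page_writeback() or
						 * throttle_vm_writeout() */
	...
	mutex_unlock(&inode->i_mutex);

	/*
	 * What the extra allocation-flags parameter is for: with GFP_NOFS,
	 * reclaim triggered by this allocation does not recurse into the
	 * filesystem and does not wait for filesystem writeback, so the
	 * i_mutex dependency cannot close the loop.
	 */
	mutex_lock(&inode->i_mutex);
	buf = kmalloc(size, GFP_NOFS);
	...
	mutex_unlock(&inode->i_mutex);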
As I'm writing this, I've realized ext4 also has this problem with the
stable-pages patches, because there we can wait for PageWriteback in
grab_cache_page_write_begin() while we also hold i_mutex. So I think we'll
have to come up with a way to convert unwritten extents without having to
hold i_mutex. That's going to be interesting.
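Roughly, the chain I have in mind looks like this (intermediate call names
are my reconstruction of the buffered write path, so take them as
approximate):

	write(2)
	  mutex_lock(&inode->i_mutex)
	  ->write_begin => grab_cache_page_write_begin()
	    wait_on_page_writeback(page)	/* with stable pages, waits
						 * while holding i_mutex */

	ext4 IO completion (unwritten extent conversion)
	  needs i_mutex to finish and clear PageWriteback
	  => writeback never completes, writer never gets the page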
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR