Message-ID: <20120309095135.GC21038@quack.suse.cz>
Date: Fri, 9 Mar 2012 10:51:35 +0100
From: Jan Kara <jack@...e.cz>
To: Fengguang Wu <fengguang.wu@...el.com>
Cc: Artem Bityutskiy <dedekind1@...il.com>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Kara <jack@...e.cz>, Greg Thelen <gthelen@...gle.com>,
Ying Han <yinghan@...gle.com>,
"hannes@...xchg.org" <hannes@...xchg.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
Rik van Riel <riel@...hat.com>, Mel Gorman <mgorman@...e.de>,
Minchan Kim <minchan.kim@...il.com>,
Linux Memory Management List <linux-mm@...ck.org>,
LKML <linux-kernel@...r.kernel.org>,
Adrian Hunter <adrian.hunter@...el.com>
Subject: Re: [PATCH 5/9] writeback: introduce the pageout work
Hello,
On Thu 08-03-12 23:31:13, Wu Fengguang wrote:
> On Wed, Mar 07, 2012 at 05:48:21PM +0200, Artem Bityutskiy wrote:
> > On Sat, 2012-03-03 at 21:55 +0800, Fengguang Wu wrote:
> > > 13 1125 /c/linux/fs/ubifs/file.c <<do_truncation>> <===== deadlockable
> >
> > Sorry, but could you please explain once again how the deadlock may
> > happen?
>
> Sorry, I confused the ubifs do_truncation() with the truncate_inode_pages()
> that may be called from iput().
>
> The deadlock scenario I once suspected is when the flusher thread calls
> the final iput:
>
> flusher thread
> iput_final
> <some ubifs function>
> ubifs_budget_space
> shrink_liability
> writeback_inodes_sb
> writeback_inodes_sb_nr
> bdi_queue_work
>             wait_for_completion => ends up waiting for the flusher itself
>
> However I cannot find any ubifs function that would form the above loop,
> so ubifs should be safe for now.
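
To make the self-deadlock explicit, here is a simplified sketch of the
writeback_inodes_sb_nr() path (field names follow the current writeback
code, but this is from memory and not the exact mainline source):

/*
 * Simplified sketch of writeback_inodes_sb_nr(); details are
 * approximate, only the queue-then-wait structure matters here.
 */
static void writeback_inodes_sb_nr(struct super_block *sb, unsigned long nr)
{
	DECLARE_COMPLETION_ONSTACK(done);
	struct wb_writeback_work work = {
		.sb		= sb,
		.sync_mode	= WB_SYNC_NONE,
		.nr_pages	= nr,
		.done		= &done,
	};

	/* Hand the work item to the bdi flusher thread... */
	bdi_queue_work(sb->s_bdi, &work);

	/*
	 * ...and block until that thread completes it.  If the caller
	 * already *is* the flusher thread (e.g. via iput_final ->
	 * ubifs_budget_space -> shrink_liability), there is nobody left
	 * to process the work and this wait never returns.
	 */
	wait_for_completion(&done);
}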
  Yeah, me neither, but I also failed to find a place where
ubifs_evict_inode() truncates the inode's space when deleting the inode... Artem?
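
For reference, what I was looking for would follow the usual ->evict_inode
pattern; a generic sketch, assuming the standard VFS sequence
(example_evict_inode is a made-up name, not ubifs's actual handler):

/*
 * Generic sketch of an ->evict_inode handler; ubifs's real code
 * differs, this only shows where truncate_inode_pages() usually sits.
 */
static void example_evict_inode(struct inode *inode)
{
	/* Drop the inode's page cache. */
	truncate_inode_pages(&inode->i_data, 0);

	if (!inode->i_nlink) {
		/*
		 * The inode is being deleted: this is where on-media
		 * space would be released, and in ubifs where budgeting
		 * (and thus shrink_liability) could in principle be hit.
		 */
	}

	end_writeback(inode);
}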
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR