Message-ID: <20100729150413.GD12690@quack.suse.cz>
Date: Thu, 29 Jul 2010 17:04:13 +0200
From: Jan Kara <jack@...e.cz>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Andrew Morton <akpm@...ux-foundation.org>,
LKML <linux-kernel@...r.kernel.org>, Jan Kara <jack@...e.cz>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
Dave Chinner <david@...morbit.com>,
Chris Mason <chris.mason@...cle.com>,
Nick Piggin <npiggin@...e.de>, Rik van Riel <riel@...hat.com>,
Johannes Weiner <hannes@...xchg.org>,
Christoph Hellwig <hch@...radead.org>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
KOSAKI Motohiro <kosaki.motohiro@...fujitsu.com>,
Andrea Arcangeli <aarcange@...hat.com>,
Mel Gorman <mel@....ul.ie>, Minchan Kim <minchan.kim@...il.com>
Subject: Re: [PATCH 1/5] writeback: introduce wbc.for_sync to cover the two
sync stages
On Thu 29-07-10 19:51:43, Wu Fengguang wrote:
> sync() is performed in two stages: the WB_SYNC_NONE sync and the
> WB_SYNC_ALL sync. It is necessary to tag both stages with
> wbc.for_sync, so that neither of them can be livelocked.
>
> The livelock avoidance scheme will be based on the sync_after
> timestamp: inodes dirtied after it won't be queued for IO. The
> timestamp could be recorded as early as sync() time; this patch
> lazily sets it in writeback_inodes_sb()/sync_inodes_sb(). That is
> enough to prevent the livelock, but may do more work than necessary.
>
> Note that writeback_inodes_sb() is called not only by sync(); its
> other callers are treated the same because they need the same
> livelock prevention.
OK, but the patch by itself does nothing, does it? I'd prefer it if the
fields you introduce were actually used in this patch.
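To illustrate, something along these lines is roughly what I'd expect
to see using them (only a sketch -- the helper name and the place it
would be called from are made up):

static bool dirtied_after_sync(struct inode *inode,
			       struct wb_writeback_work *work)
{
	/* Only relevant for writeback queued on behalf of sync */
	if (!work->for_sync)
		return false;
	/* dirtied_when and sync_after are both jiffies timestamps */
	return time_after(inode->dirtied_when, work->sync_after);
}

i.e. the code that moves inodes from b_dirty to b_io (somewhere around
move_expired_inodes()/queue_io()) would skip inodes for which this
returns true, so a process that keeps redirtying an inode cannot
livelock sync.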
Honza
> CC: Jan Kara <jack@...e.cz>
> Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> ---
> fs/fs-writeback.c | 21 ++++++++++++---------
> include/linux/writeback.h | 1 +
> 2 files changed, 13 insertions(+), 9 deletions(-)
>
> --- linux-next.orig/fs/fs-writeback.c 2010-07-28 17:05:17.000000000 +0800
> +++ linux-next/fs/fs-writeback.c 2010-07-28 21:21:31.000000000 +0800
> @@ -36,6 +36,8 @@ struct wb_writeback_work {
> long nr_pages;
> struct super_block *sb;
> enum writeback_sync_modes sync_mode;
> + unsigned long sync_after;
> + unsigned int for_sync:1;
> unsigned int for_kupdate:1;
> unsigned int range_cyclic:1;
> unsigned int for_background:1;
> @@ -1086,20 +1090,17 @@ static void wait_sb_inodes(struct super_
> */
> void writeback_inodes_sb(struct super_block *sb)
> {
> - unsigned long nr_dirty = global_page_state(NR_FILE_DIRTY);
> - unsigned long nr_unstable = global_page_state(NR_UNSTABLE_NFS);
> DECLARE_COMPLETION_ONSTACK(done);
> struct wb_writeback_work work = {
> .sb = sb,
> .sync_mode = WB_SYNC_NONE,
> + .for_sync = 1,
> + .sync_after = jiffies,
> .done = &done,
> };
>
> WARN_ON(!rwsem_is_locked(&sb->s_umount));
>
> - work.nr_pages = nr_dirty + nr_unstable +
> - (inodes_stat.nr_inodes - inodes_stat.nr_unused);
> -
> bdi_queue_work(sb->s_bdi, &work);
> wait_for_completion(&done);
> }
> @@ -1137,6 +1138,8 @@ void sync_inodes_sb(struct super_block *
> struct wb_writeback_work work = {
> .sb = sb,
> .sync_mode = WB_SYNC_ALL,
> + .for_sync = 1,
> + .sync_after = jiffies,
> .nr_pages = LONG_MAX,
> .range_cyclic = 0,
> .done = &done,
> --- linux-next.orig/include/linux/writeback.h 2010-07-28 17:05:17.000000000 +0800
> +++ linux-next/include/linux/writeback.h 2010-07-28 21:24:54.000000000 +0800
> @@ -48,6 +48,7 @@ struct writeback_control {
> unsigned encountered_congestion:1; /* An output: a queue is full */
> unsigned for_kupdate:1; /* A kupdate writeback */
> unsigned for_background:1; /* A background writeback */
> + unsigned for_sync:1; /* A writeback for sync */
> unsigned for_reclaim:1; /* Invoked from the page allocator */
> unsigned range_cyclic:1; /* range_start is cyclic */
> };
>
>
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR