Message-ID: <20110506142155.GD18982@quack.suse.cz>
Date: Fri, 6 May 2011 16:21:55 +0200
From: Jan Kara <jack@...e.cz>
To: Wu Fengguang <fengguang.wu@...el.com>
Cc: Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
Mel Gorman <mel@...ux.vnet.ibm.com>,
Mel Gorman <mel@....ul.ie>, Dave Chinner <david@...morbit.com>,
Itaru Kitayama <kitayama@...bb4u.ne.jp>,
Minchan Kim <minchan.kim@...il.com>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] writeback: refill b_io iff empty
On Fri 06-05-11 13:29:55, Wu Fengguang wrote:
> On Fri, May 06, 2011 at 12:37:08AM +0800, Jan Kara wrote:
> > On Wed 04-05-11 15:39:31, Wu Fengguang wrote:
> > > To help understand the behavior change, I wrote the writeback_queue_io
> > > trace event, and found very different patterns between
> > > - vanilla kernel
> > > - this patchset plus the sync livelock fixes
> > >
> > > Basically the vanilla kernel each time pulls a random number of inodes
> > > from b_dirty, while the patched kernel tends to pull a fixed number of
> > > inodes (enqueue=1031) from b_dirty. The new behavior is very interesting...
> > This regularity is really strange. Did you have a chance to look more into
> > it? I find it highly unlikely that there would be exactly 1031 dirty inodes
> > in b_dirty list every time you call move_expired_inodes()...
>
> Jan, I got some results for ext4. The total dd+tar+sync time decreased
> from 177s to 167s. The other numbers are mixed: some went up and some
> went down.
Nice, but what I was more curious about was understanding why you saw
enqueue=1031 all the time. BTW, I'd suppose the better performance
numbers come from sync using page tagging, right? From the traces it
seems that not much IO is going on until sync is called, and I'd expect
tagging to bring you some performance because you now sync a file in one
big sweep instead of in 4MB chunks...
> 1902.672610: writeback_queue_io: older=4296543506 age=30000 enqueue=0
> 1905.209570: writeback_queue_io: older=4296546051 age=30000 enqueue=0
> 1907.294936: writeback_queue_io: older=4296548143 age=30000 enqueue=0
> 1909.607301: writeback_queue_io: older=4296550462 age=30000 enqueue=0
> 1912.290627: writeback_queue_io: older=4296553154 age=30000 enqueue=0
> 1914.331197: writeback_queue_io: older=4296555201 age=30000 enqueue=0
> 1927.275838: writeback_queue_io: older=4296568186 age=30000 enqueue=0
> 1927.277794: writeback_queue_io: older=4296568188 age=30000 enqueue=0
> 1927.279504: writeback_queue_io: older=4296568189 age=30000 enqueue=0
> 1927.279923: writeback_queue_io: older=4296568190 age=30000 enqueue=0
> 1929.981734: writeback_queue_io: older=4296600898 age=2 enqueue=13227
> 1932.840150: writeback_queue_io: older=4296600898 age=2869 enqueue=0
> 1932.840781: writeback_queue_io: older=4296603768 age=0 enqueue=0
> 1932.840787: writeback_queue_io: older=4296573768 age=30000 enqueue=0
> 1932.991596: writeback_queue_io: older=4296603919 age=0 enqueue=1
> 1937.975765: writeback_queue_io: older=4296578919 age=30000 enqueue=0
> 1942.960305: writeback_queue_io: older=4296583919 age=30000 enqueue=0
> 1947.944925: writeback_queue_io: older=4296588919 age=30000 enqueue=0
> 1952.929427: writeback_queue_io: older=4296593919 age=30000 enqueue=0
> 1957.914031: writeback_queue_io: older=4296598919 age=30000 enqueue=0
> 1962.898507: writeback_queue_io: older=4296603919 age=30000 enqueue=1
> 1962.898518: writeback_queue_io: older=4296603919 age=30000 enqueue=0
OK, so now the enqueue numbers look like what I'd expect. I'm relieved :)
Thanks for running the tests.
Honza
--
Jan Kara <jack@...e.cz>
SUSE Labs, CR
--