Message-ID: <20110504073931.GA22675@localhost>
Date: Wed, 4 May 2011 15:39:31 +0800
From: Wu Fengguang <fengguang.wu@...el.com>
To: Andrew Morton <akpm@...ux-foundation.org>
Cc: Jan Kara <jack@...e.cz>, Mel Gorman <mel@...ux.vnet.ibm.com>,
Mel Gorman <mel@....ul.ie>, Dave Chinner <david@...morbit.com>,
Itaru Kitayama <kitayama@...bb4u.ne.jp>,
Minchan Kim <minchan.kim@...il.com>,
Linux Memory Management List <linux-mm@...ck.org>,
"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] writeback: refill b_io iff empty
To help understand the behavior change, I wrote the writeback_queue_io
trace event, and found very different patterns between
- vanilla kernel
- this patchset plus the sync livelock fixes
Basically, on each refill the vanilla kernel pulls a varying number of
inodes from b_dirty, while the patched kernel tends to pull a fixed
number of inodes (enqueue=1031) from b_dirty. The new behavior is very
interesting...
The attached test script runs 1 dd and 1 tar concurrently on XFS; its
output can be found at the start of the trace files. The elapsed time
is 289s for the vanilla kernel and 270s for the patched kernel.
Thanks,
Fengguang
View attachment "writeback-trace-queue_io.patch" of type "text/x-diff" (3107 bytes)
Download attachment "test-tar-dd.sh" of type "application/x-sh" (703 bytes)
View attachment "trace-2.6.39-rc3-dyn-expire+" of type "text/plain" (85738 bytes)
View attachment "trace-2.6.39-rc3" of type "text/plain" (50539 bytes)