Date:	Tue, 10 May 2011 14:53:46 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Jan Kara <jack@...e.cz>, Andrew Morton <akpm@...ux-foundation.org>,
	Mel Gorman <mel@...ux.vnet.ibm.com>,
	Mel Gorman <mel@....ul.ie>,
	Itaru Kitayama <kitayama@...bb4u.ne.jp>,
	Minchan Kim <minchan.kim@...il.com>,
	Linux Memory Management List <linux-mm@...ck.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 6/6] writeback: refill b_io iff empty

On Tue, May 10, 2011 at 12:31:04PM +0800, Wu Fengguang wrote:
> On Fri, May 06, 2011 at 10:21:55PM +0800, Jan Kara wrote:
> > On Fri 06-05-11 13:29:55, Wu Fengguang wrote:
> > > On Fri, May 06, 2011 at 12:37:08AM +0800, Jan Kara wrote:
> > > > On Wed 04-05-11 15:39:31, Wu Fengguang wrote:
> > > > > To help understand the behavior change, I wrote the writeback_queue_io
> > > > > trace event, and found very different patterns between
> > > > > - vanilla kernel
> > > > > - this patchset plus the sync livelock fixes
> > > > > 
> > > > > Basically, the vanilla kernel pulls a seemingly random number of inodes
> > > > > from b_dirty each time, while the patched kernel tends to pull a fixed
> > > > > number (enqueue=1031). The new behavior is very interesting...
> > > >   This regularity is really strange. Did you have a chance to look into it
> > > > further? I find it highly unlikely that there would be exactly 1031 dirty
> > > > inodes on the b_dirty list every time you call move_expired_inodes()...
> > > 
> > > Jan, I got some results for ext4. The total dd+tar+sync time decreased
> > > from 177s to 167s. The other numbers either increased or decreased.
> >   Nice, but what I was more curious about was why you saw enqueued=1031
> > all the time.
> 
> Maybe some unknown interactions with XFS? Attached is another trace
> with both writeback_single_inode and writeback_queue_io.

Perhaps because write throttling is limiting the number of files being
dirtied to match the number of files being cleaned? Hence they age at
roughly the same rate as writeback is cleaning them? Especially as most
files are only a single page in size?
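
As a rough way to see that first possibility, here is a toy standalone
simulation (made-up names and numbers, not kernel code) of refilling b_io
only when it is empty, with dirtying throttled to the cleaning rate. The
enqueue count it prints settles to a constant, much like the near-fixed
counts in the traces:

#include <stdio.h>

#define TICKS       120   /* simulated time steps */
#define CLEAN_RATE  100   /* single-page inodes written back per tick */
#define EXPIRE_AGE   30   /* ticks before a dirty inode counts as expired */

int main(void)
{
	/* dirtied[t]: number of inodes dirtied during tick t */
	static int dirtied[TICKS];
	int io_pending = 0;       /* inodes still queued on the toy "b_io" list */
	int oldest_unqueued = 0;  /* first tick whose inodes are still on "b_dirty" */

	for (int t = 0; t < TICKS; t++) {
		/*
		 * Throttling: new dirtying is limited to (roughly) the rate
		 * at which writeback is cleaning.
		 */
		dirtied[t] = CLEAN_RATE;

		/* Writeback works through the current b_io batch. */
		io_pending -= CLEAN_RATE;
		if (io_pending > 0)
			continue;
		io_pending = 0;

		/* b_io is empty: refill it with every inode past the cutoff. */
		int batch = 0;
		while (oldest_unqueued <= t - EXPIRE_AGE) {
			batch += dirtied[oldest_unqueued];
			oldest_unqueued++;
		}
		if (batch) {
			io_pending = batch;
			printf("tick %3d: enqueue=%d\n", t, batch);
		}
	}
	return 0;
}

If dirtying were bursty instead of throttled, each refill would pick up a
different number of expired inodes and the enqueue counts would vary,
which is more like the vanilla behaviour.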

Or perhaps that is the rate at which IO completions are occurring,
updating the inode size and redirtying the inode? After all, there
are lots of inodes that are only state=I_DIRTY_SYNC and wrote=0 in
the traces around the point where it starts going to ~1000 inodes
per queue_io call...
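
For that second possibility, here is a similarly minimal standalone model
(hypothetical toy_* names, not the actual XFS completion path) of a data
write completion that has to extend the on-disk file size and therefore
redirties the inode for metadata-only writeback:

#include <stdio.h>

#define I_DIRTY_SYNC 0x1   /* "inode has dirty metadata"; flag name as in the kernel */

struct toy_inode {
	unsigned long i_state;    /* dirty-state flags */
	long long     disk_size;  /* file size currently recorded on disk */
};

/*
 * Toy stand-in for marking an inode metadata-dirty; in the kernel this
 * would also put the inode back on the bdi's b_dirty list with a fresh
 * dirtied_when timestamp.
 */
static void toy_mark_inode_dirty_sync(struct toy_inode *inode)
{
	if (!(inode->i_state & I_DIRTY_SYNC)) {
		inode->i_state |= I_DIRTY_SYNC;
		printf("inode redirtied: state=I_DIRTY_SYNC, back on b_dirty\n");
	}
}

/* Toy I/O completion for a data write that ends at 'end_offset'. */
static void toy_write_completion(struct toy_inode *inode, long long end_offset)
{
	/* An extending write needs the on-disk size updated afterwards... */
	if (end_offset > inode->disk_size) {
		inode->disk_size = end_offset;
		/* ...which dirties the inode again, metadata only. */
		toy_mark_inode_dirty_sync(inode);
	}
}

int main(void)
{
	struct toy_inode inode = { .i_state = 0, .disk_size = 0 };

	/* Data writeback finishes for the single 4KiB page of a small file: */
	toy_write_completion(&inode, 4096);

	/*
	 * The next queue_io pass then sees an inode with clean data but
	 * dirty metadata, which is what the state=I_DIRTY_SYNC, wrote=0
	 * entries in the traces look like.
	 */
	return 0;
}

Each such completion would put the inode back on b_dirty with a fresh
timestamp, so lots of small appending writes would feed a steady stream
of metadata-only inodes into the following queue_io calls.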

Or maybe a combination of both?

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com