Message-ID: <20091007122118.GA20855@localhost>
Date:	Wed, 7 Oct 2009 20:21:18 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Nick Piggin <npiggin@...e.de>
Cc:	David Howells <dhowells@...hat.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Theodore Tso <tytso@....edu>,
	Christoph Hellwig <hch@...radead.org>,
	Dave Chinner <david@...morbit.com>,
	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Li, Shaohua" <shaohua.li@...el.com>,
	Myklebust Trond <Trond.Myklebust@...app.com>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
	Jan Kara <jack@...e.cz>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH 14/45] writeback: quit on wrap for .range_cyclic (afs)

On Wed, Oct 07, 2009 at 07:23:02PM +0800, Nick Piggin wrote:
> On Wed, Oct 07, 2009 at 06:47:11PM +0800, Wu Fengguang wrote:
> > On Wed, Oct 07, 2009 at 06:21:30PM +0800, Nick Piggin wrote:
> > > On Wed, Oct 07, 2009 at 11:17:06AM +0100, David Howells wrote:
> > > > Wu Fengguang <fengguang.wu@...el.com> wrote:
> > > > 
> > > > > Convert wbc.range_cyclic to new behavior: when past EOF, abort writeback
> > > > > of the inode, which instructs writeback_single_inode() to delay it for
> > > > > a while if necessary.
> > > > > 
> > > > > It removes one inefficient .range_cyclic IO pattern when writeback_index
> > > > > wraps:
> > > > > 	submit [10000-10100], (wrap), submit [0-100]
> > > > > in which the submitted pages may consist of two distant ranges.
> > > > > 
> > > > > It also prevents submitting pointless IO for busy overwriters.
> > > > > 
> > > > > CC: David Howells <dhowells@...hat.com>
> > > > > Signed-off-by: Wu Fengguang <fengguang.wu@...el.com>
> > > > 
> > > > Acked-by: David Howells <dhowells@...hat.com>
> > > 
> > > I don't see why. Then the inode is given less write bandwidth than
> > > those which don't wrap (or wrap on "nice" boundaries).
> > 
> > The "return on wrapped" behavior itself only offers a natural seek
> > boundary to the upper layer.  It's mainly the "whether to delay"
> > policy that will affect (overall) bandwidth.
> > 
> > If we choose to not sleep, and to go on with other inodes and then
> > back to this inode, no bandwidth will be lost.
> > 
> > If we have done work on other inodes (if any), and choose to sleep
> > for a while before restarting this inode, then we could lose bandwidth.
> > The plus side is that we may avoid submitting extra IO if this inode
> > is being busily overwritten. So it's a tradeoff.
> > 
> > The behavior after this patchset is to keep busy as long as we can
> > write any pages (see patch 38/45). So we still opt for bandwidth :)
> 
> No I mean bandwidth fairness between inodes.

I guess it's the old semantics that has the bandwidth fairness problem :)

Imagine the write chunk size is 4MB, and inodes A and B have sizes of
6MB and 8MB respectively.

The old semantics produce the write sequence

        4MB for A; 4MB for B; other inodes;
        4MB for A; 4MB for B; other inodes;
        4MB for A; 4MB for B; other inodes;

while the new sequence would be

        4MB for A; 4MB for B; other inodes;
        2MB for A; 4MB for B; other inodes;
        4MB for A; 4MB for B; other inodes;
        2MB for A; 4MB for B; other inodes;

On average, each page in A used to get more write opportunities than B's
(4MB spread over 6MB of pages per round, versus 4MB over 8MB for B).
Now with no-wrap, A averages 3MB per round, so A's and B's pages have
the same chance of being written back.
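
To make that arithmetic concrete, here is a small self-contained sketch
(not part of the patchset; the chunk and inode sizes are just the example
numbers above) that computes the average per-page write frequency under
the old and new semantics:

/* Back-of-the-envelope check of per-page writeback frequency.
 * Assumed numbers: 4MB write chunk, inode A = 6MB, inode B = 8MB.
 */
#include <stdio.h>

int main(void)
{
	double chunk = 4.0;                 /* MB submitted per visit */
	double size_a = 6.0, size_b = 8.0;  /* inode sizes in MB */

	/* Old semantics: wrapping lets A submit a full chunk every
	 * round (e.g. [4-6MB] plus [0-2MB] in the same visit). */
	double old_a = chunk / size_a;
	double old_b = chunk / size_b;

	/* New semantics: A alternates a 4MB visit and a 2MB visit
	 * (the remainder up to EOF), i.e. 3MB per round on average;
	 * B still gets a full chunk every round. */
	double new_a = ((chunk + (size_a - chunk)) / 2.0) / size_a;
	double new_b = chunk / size_b;

	printf("old: A=%.2f B=%.2f writes/page/round\n", old_a, old_b);
	printf("new: A=%.2f B=%.2f writes/page/round\n", new_a, new_b);
	return 0;
}

It prints 0.67 vs 0.50 for the old behavior and 0.50 vs 0.50 for the new
one, matching the fairness argument above.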

Thanks,
Fengguang
