Message-ID: <20090924073615.GA21733@localhost>
Date:	Thu, 24 Sep 2009 15:36:15 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Arjan van de Ven <arjan@...radead.org>
Cc:	"Li, Shaohua" <shaohua.li@...el.com>,
	lkml <linux-kernel@...r.kernel.org>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Chris Mason <chris.mason@...cle.com>, Jan Kara <jack@...e.cz>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>
Subject: Re: [RFC] page-writeback: move inodes from one superblock together

On Thu, Sep 24, 2009 at 03:29:35PM +0800, Arjan van de Ven wrote:
> On Thu, 24 Sep 2009 15:14:15 +0800
> Wu Fengguang <fengguang.wu@...el.com> wrote:
> 
> > On Thu, Sep 24, 2009 at 02:54:20PM +0800, Li, Shaohua wrote:
> > > __mark_inode_dirty adds inodes to the wb dirty list in random
> > > order. If a disk has several partitions, writeback may keep the
> > > spindle moving between partitions. To reduce those moves, it is
> > > better to write a big chunk of one partition and then move on to
> > > another. Inodes from one fs are usually in one partition, so
> > > ideally moving inodes from one fs together should reduce spindle
> > > movement. This patch tries to address this. Before per-bdi
> > > writeback was added, the behavior was to write inodes from one fs
> > > first and then another, so the patch restores the previous
> > > behavior. The loop in the patch is a bit ugly; should we add a
> > > dirty list for each superblock in bdi_writeback?
> > > 
> > > A test on a two-partition disk with the attached fio script shows
> > > about a 3% ~ 6% improvement.
> > 
> > Reviewed-by: Wu Fengguang <fengguang.wu@...el.com>
> > 
> > Good idea! The optimization looks good to me; it addresses one
> > weakness of per-bdi writeback.
> > 
> > But one problem is that Jan Kara and I are planning to remove b_io,
> > and hence this move_expired_inodes() function. I'm not sure how to
> > do this optimization without b_io.
> > 
> > > Signed-off-by: Shaohua Li <shaohua.li@...el.com>
> > > 
> > > diff --git a/fs/fs-writeback.c b/fs/fs-writeback.c
> > > index 8e1e5e1..fc87730 100644
> > > --- a/fs/fs-writeback.c
> > > +++ b/fs/fs-writeback.c
> > > @@ -324,13 +324,29 @@ static void move_expired_inodes(struct list_head *delaying_queue,
> > >  			       struct list_head *dispatch_queue,
> > >  				unsigned long *older_than_this)
> > >  {
> > > +	LIST_HEAD(tmp);
> > > +	struct list_head *pos, *node;
> > > +	struct super_block *sb;
> > > +	struct inode *inode;
> > > +
> > >  	while (!list_empty(delaying_queue)) {
> > > -		struct inode *inode = list_entry(delaying_queue->prev,
> > > -						struct inode, i_list);
> > > +		inode = list_entry(delaying_queue->prev, struct inode, i_list);
> > >  		if (older_than_this &&
> > >  		    inode_dirtied_after(inode, *older_than_this))
> > >  			break;
> > > -		list_move(&inode->i_list, dispatch_queue);
> > > +		list_move(&inode->i_list, &tmp);
> > > +	}
> > > +
> > > +	/* Move inodes from one superblock together */
> > > +	while (!list_empty(&tmp)) {
> > > +		inode = list_entry(tmp.prev, struct inode, i_list);
> > > +		sb = inode->i_sb;
> > > +		list_for_each_prev_safe(pos, node, &tmp) {
> > 
> > We are holding the spin lock, so isn't it unnecessary to use the
> > safe version?
> > 
> 
> safe is needed for list walks that remove entries from the list;
> it has nothing to do with locking ;-)

Ah yes, thanks!
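
For anyone who wants to poke at the grouping pass outside the kernel, here
is a minimal userspace sketch of the idea. The types and list helpers below
are my own stand-ins (simplified re-implementations in the style of
include/linux/list.h), not the real kernel code: expired inodes are first
collected on a temporary list, then drained one superblock at a time with a
_safe reverse walk, precisely because the walk moves entries off the list it
is iterating.

/* list.h-style doubly linked list, trimmed down for this sketch */
#include <stddef.h>
#include <stdio.h>

struct list_head { struct list_head *next, *prev; };

#define LIST_HEAD(name) struct list_head name = { &(name), &(name) }

static void list_add(struct list_head *e, struct list_head *head)
{
	e->next = head->next;
	e->prev = head;
	head->next->prev = e;
	head->next = e;
}

static void list_del(struct list_head *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
}

/* unlink @e and re-insert it right after @head */
static void list_move(struct list_head *e, struct list_head *head)
{
	list_del(e);
	list_add(e, head);
}

static int list_empty(const struct list_head *head)
{
	return head->next == head;
}

#define list_entry(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* reverse walk that caches the previous node, so @pos may be moved away */
#define list_for_each_prev_safe(pos, n, head) \
	for (pos = (head)->prev, n = pos->prev; pos != (head); \
	     pos = n, n = pos->prev)

/* stand-in for struct inode: just a superblock id and the list linkage */
struct fake_inode {
	int sb;
	struct list_head i_list;
};

/*
 * Drain @tmp one superblock at a time: pick the sb of the oldest entry,
 * then sweep the whole list and move every entry with that sb onto
 * @dispatch_queue.  The _safe walk is what allows moving entries while
 * iterating - it has nothing to do with locking.
 */
static void sort_by_sb(struct list_head *tmp, struct list_head *dispatch_queue)
{
	struct list_head *pos, *node;

	while (!list_empty(tmp)) {
		int sb = list_entry(tmp->prev, struct fake_inode, i_list)->sb;

		list_for_each_prev_safe(pos, node, tmp) {
			struct fake_inode *in =
				list_entry(pos, struct fake_inode, i_list);

			if (in->sb == sb)
				list_move(&in->i_list, dispatch_queue);
		}
	}
}

int main(void)
{
	/* dirty inodes arriving in interleaved superblock order */
	struct fake_inode inodes[] = {
		{ .sb = 1 }, { .sb = 2 }, { .sb = 1 }, { .sb = 2 }, { .sb = 1 },
	};
	LIST_HEAD(tmp);
	LIST_HEAD(dispatch);
	struct list_head *pos;
	size_t i;

	for (i = 0; i < sizeof(inodes) / sizeof(inodes[0]); i++)
		list_add(&inodes[i].i_list, &tmp);

	sort_by_sb(&tmp, &dispatch);

	for (pos = dispatch.next; pos != &dispatch; pos = pos->next)
		printf("sb %d\n",
		       list_entry(pos, struct fake_inode, i_list)->sb);
	return 0;
}

Built with a plain cc, it prints the sb ids grouped together (2 2 1 1 1 for
the sample order above), which is the adjacency property the patch is after;
whether the final version keeps this exact two-loop shape is a separate
question, given the plan to remove b_io mentioned earlier.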

