Date:	Wed, 23 Sep 2009 10:08:40 -0400
From:	Chris Mason <chris.mason@...cle.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Theodore Tso <tytso@....edu>, Jens Axboe <jens.axboe@...cle.com>,
	Christoph Hellwig <hch@...radead.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"linux-fsdevel@...r.kernel.org" <linux-fsdevel@...r.kernel.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	"jack@...e.cz" <jack@...e.cz>
Subject: Re: [PATCH 0/7] Per-bdi writeback flusher threads v20

On Wed, Sep 23, 2009 at 09:05:41AM +0800, Wu Fengguang wrote:

[ timeslice based limits on number of pages sent by the bdi threads ]

> > 
> > The reason I prefer the timeslice idea is that we don't need the
> > hardware to tell us how fast it is.  We just write for a while and move
> > on.
> 
> That makes sense.  Note that the triple (pages, page segments,
> submission time) can adapt to hardware capabilities to some extent
> (and at least won't hurt fast arrays).
> 
> - max pages is set to a number large enough for big arrays
> - max page segments could be based on the existing blk_queue_nonrot()
> - submission time = 1s, which is mainly a safeguard for slow devices
>   (e.g. a USB stick), to prevent a single inode from taking too much
>   time. This time limit has little performance impact.
> 
> Possible merits are
> - these parameters are concrete and easy to handle
> - it's natural to implement the related logic at the VFS level
> - file systems need do nothing to get most of the benefits
> 
> Also, the (now necessary) per-invocation limit could be eliminated
> once balance_dirty_pages() no longer does IO itself.
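
To make the proposal concrete, here is a minimal userspace sketch of
the triple limit (not the actual patch); write_one_segment() and the
limit values are hypothetical stand-ins:

/*
 * Sketch only: writeback of one inode stops as soon as ANY of the
 * three limits trips.  write_one_segment() is a hypothetical helper
 * that writes one contiguous run of dirty pages and returns how many
 * pages it wrote (0 when the inode is clean).
 */
#include <time.h>

struct wb_limits {
	unsigned long max_pages;	/* sized for big arrays */
	unsigned long max_segments;	/* could key off blk_queue_nonrot() */
	double max_seconds;		/* ~1s safeguard for slow devices */
};

unsigned long write_one_segment(void *inode);	/* hypothetical */

static double elapsed(const struct timespec *start)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return (now.tv_sec - start->tv_sec) +
	       (now.tv_nsec - start->tv_nsec) / 1e9;
}

unsigned long writeback_inode(void *inode, const struct wb_limits *lim)
{
	unsigned long pages = 0, segments = 0, n;
	struct timespec start;

	clock_gettime(CLOCK_MONOTONIC, &start);
	while ((n = write_one_segment(inode)) > 0) {
		pages += n;
		segments++;
		if (pages >= lim->max_pages ||
		    segments >= lim->max_segments ||
		    elapsed(&start) >= lim->max_seconds)
			break;	/* move on to the next inode */
	}
	return pages;
}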

I think there are probably a lot of good ways to improve on today's
single max-number-of-pages metric, but I'm worried about the time
spent computing page segments.  The radix tree isn't all that well
suited to it.
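
The worry, made concrete: a page segment is a maximal run of
contiguous dirty pages, so counting segments means visiting every
dirty page index. A toy model, assuming the indices arrive in
ascending order as a tagged radix-tree lookup would produce them:

/*
 * Toy model of the cost: given the dirty page indices in ascending
 * order (roughly what a PAGECACHE_TAG_DIRTY radix-tree walk yields),
 * counting segments is O(n) in the number of dirty pages, on every
 * invocation.
 */
#include <stddef.h>

static unsigned long count_page_segments(const unsigned long *idx, size_t n)
{
	unsigned long segments = 0;
	size_t i;

	for (i = 0; i < n; i++)
		if (i == 0 || idx[i] != idx[i - 1] + 1)
			segments++;	/* a gap starts a new segment */
	return segments;
}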

But if you've got a patch, I'd be happy to run a comparison against it.
Jens' box will be better at showing any CPU cost from the radix walking.

-chris

