Date:	Fri, 25 Sep 2009 10:11:17 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Li, Shaohua" <shaohua.li@...el.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"richard@....demon.co.uk" <richard@....demon.co.uk>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: regression in page writeback

On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > The only place that actually honors the congestion flag is pdflush.
> > It's trivial to get pdflush backed up and make it sit down without
> > making any progress because once the queue congests, pdflush goes away.
> 
> Right. I guess that's more or less intentional - to give lowest priority
> to periodic/background writeback.

IMO, this is the wrong design. Background writeback should
have higher CPU/scheduler priority than normal tasks. If there are
sufficient dirty pages in the system for background writeback to
be active, it should be running *now* to start as much IO as it can
without being held up by other, lower-priority tasks.
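As a rough sketch of what I mean (illustrative only; the nice value
is a guess, and bdi_flusher_thread() is an invented name for a
dedicated per-bdi flusher thread):

#include <linux/sched.h>	/* set_user_nice() */
#include <linux/kthread.h>	/* kthread_should_stop() */
#include <linux/backing-dev.h>	/* struct backing_dev_info */

static int bdi_flusher_thread(void *data)
{
	struct backing_dev_info *bdi = data;

	/*
	 * Run at higher priority than normal (nice 0) tasks so
	 * background writeback is not held up by them.
	 */
	set_user_nice(current, -10);

	while (!kthread_should_stop()) {
		/* ... issue background writeback for this bdi ... */
	}
	return 0;
}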

Cleaning pages is important to keeping the system running smoothly.
Given that IO takes time to clean pages, it is important to issue
as much of it as possible, as quickly as possible, before going
back to sleep. Delaying the issue of IO, or issuing it
sub-optimally, simply reduces system performance because it takes
longer to clean the same number of dirty pages.
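i.e. the flusher's main loop should be "push a big batch, then
sleep", not "trickle and back off". Something like this shape
(bdi_dirty_pages() and bdi_writeback_batch() are invented names,
not the current code; MAX_WRITEBACK_PAGES is the existing batch
size constant):

/*
 * Sketch of the issue pattern: keep pushing IO in large batches
 * until there is nothing left to issue, and only then go back to
 * sleep.  Both helpers below are invented for illustration.
 */
static void bdi_flush_all_dirty(struct backing_dev_info *bdi)
{
	while (bdi_dirty_pages(bdi)) {
		long written = bdi_writeback_batch(bdi, MAX_WRITEBACK_PAGES);

		if (written <= 0)
			break;	/* can't issue any more right now */
	}
	/* now sleep until enough pages are dirtied again */
}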

> > Nothing stops other procs from keeping the queue congested forever.
> > This can only be fixed by making everyone wait for congestion, at which
> > point we might as well wait for requests.
> 
> Yes. That gives everyone a roughly equal opportunity. This is a policy
> change that may lead to interesting effects, as well as present a
> challenge to get_request_wait(). That said, I'm not against the change
> to a wait queue in general.

If you block all threads doing _write-behind caching_ (synchronous IO
is self-throttling) to the same BDI on the same queue as the bdi
flusher, then when congestion clears the higher-priority background
flusher thread should run first and issue more IO.  This should
happen as a natural side-effect of our scheduling algorithms, and it
gives preference to efficient background writeback over inefficient
foreground writeback. Indeed, with this approach we can even avoid
foreground writeback altogether...
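
Something like this is what I have in mind (names invented,
untested):

/*
 * One wait queue per BDI that both the flusher thread and the
 * write-behind writers sleep on while the device is congested.
 * bdi->congestion_wq is an invented field; wait_event() and
 * wake_up_all() are the standard primitives.
 */
static void bdi_writer_throttle(struct backing_dev_info *bdi)
{
	/* foreground writers block here instead of doing writeback */
	wait_event(bdi->congestion_wq, !bdi_write_congested(bdi));
}

static void bdi_congestion_cleared(struct backing_dev_info *bdi)
{
	/*
	 * Wake everyone sleeping on the queue.  The scheduler should
	 * run the higher priority flusher thread first, so it issues
	 * the next batch of IO before the throttled writers proceed.
	 */
	wake_up_all(&bdi->congestion_wq);
}

The writers never issue IO themselves; they just wait for the
flusher to make progress, so the flusher always gets first go at
the queue.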

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com