Date:	Fri, 25 Sep 2009 14:45:03 +0800
From:	Wu Fengguang <fengguang.wu@...el.com>
To:	Dave Chinner <david@...morbit.com>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Li, Shaohua" <shaohua.li@...el.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"richard@....demon.co.uk" <richard@....demon.co.uk>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: regression in page writeback

On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > making any progress because once the queue congests, pdflush goes away.
> > > > 
> > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > to periodic/background writeback.
> > > 
> > > IMO, this is the wrong design. Background writeback should
> > > have higher CPU/scheduler priority than normal tasks. If there is
> > > sufficient dirty pages in the system for background writeback to
> > > be active, it should be running *now* to start as much IO as it can
> > > without being held up by other, lower priority tasks.
> > 
> > I'd say that an fsync from mutt or vi should be done at a higher prio
> > than a background streaming writer.
> 
> I don't think you caught everything I said - synchronous IO is
> un-throttled.

O_SYNC writes may be un-throttled in theory; in practice, however, they
seem to be throttled:

  generic_file_aio_write
    __generic_file_aio_write
      generic_file_buffered_write
        generic_perform_write
          balance_dirty_pages_ratelimited
    generic_write_sync

Do you mean some other code path?
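
Here is roughly where the throttle sits in that path (a condensed
sketch of the buffered write loop, not verbatim kernel code;
write_begin/write_end, the user copy, error handling and locking are
all elided):

  /*
   * Condensed sketch (not verbatim kernel code).  The point is that
   * the dirty throttle runs inside the copy loop, before
   * generic_write_sync() ever looks at the O_SYNC flag.
   */
  static ssize_t generic_perform_write_sketch(struct file *file,
                                              struct iov_iter *i, loff_t pos)
  {
          struct address_space *mapping = file->f_mapping;
          ssize_t written = 0;

          do {
                  /* ->write_begin(), copy from user, ->write_end()
                   * dirty one page here ... */

                  /* ... and then every writer, O_SYNC or not, may be
                   * put to sleep here if it exceeds its dirty limits: */
                  balance_dirty_pages_ratelimited(mapping);
          } while (iov_iter_count(i));

          return written;
  }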

> Background writeback should dump async IO to the elevator as fast as
> it can, then get the hell out of the way. If you've got a UP system,
> then the fsync can't be issued at the same time pdflush is running
> (same as right now), and if you've got a MP system then fsync can
> run at the same time.

I think you are right for system-wide sync.

System-wide sync seems to always wait for the queued bdi writeback
work items to finish, which should be fine in terms of efficiency,
except that sync could end up doing more work than necessary and could
even live lock.
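
A toy sketch of the live lock concern (illustration only; the helpers
bdi_has_dirty_pages(), bdi_nr_dirty_snapshot() and write_some_pages()
are made up, not real kernel interfaces):

  /*
   * Illustration only.  If the queued sync work loops "until nothing
   * is dirty", a steady dirtier can keep it spinning forever; bounding
   * it to the pages that were dirty when the work was queued makes it
   * terminate.
   */
  static void sync_work_unbounded(struct backing_dev_info *bdi)
  {
          while (bdi_has_dirty_pages(bdi))        /* may never become false */
                  write_some_pages(bdi);
  }

  static void sync_work_bounded(struct backing_dev_info *bdi)
  {
          long todo = bdi_nr_dirty_snapshot(bdi); /* snapshot at queue time */

          while (todo > 0)
                  todo -= write_some_pages(bdi);
  }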

> On the premise that sync IO is unthrottled and given that elevators
> queue and issue sync IO separately to async writes, fsync latency
> would be entirely derived from the elevator queuing behaviour, not
> the CPU priority of pdflush.

It's not exactly CPU priority, but queue fullness priority.

fsync operations always use nonblocking=0, so in fact they _used to_
enjoy better priority than pdflush. The same goes for vmscan pageout,
which calls writepage directly. Neither will back off on a congested bdi.

So when an fsync or a pageout comes in, it will always be served first.
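
The backoff I'm referring to is this check in write_cache_pages()
(quoted from memory, so the exact lines may differ between trees):

  /* From write_cache_pages() in mm/page-writeback.c (from memory, not
   * verbatim).  pdflush passes nonblocking=1 and bails out on a
   * congested bdi; fsync and pageout pass nonblocking=0, so they keep
   * queueing IO regardless of congestion. */
  if (wbc->nonblocking && bdi_write_congested(bdi)) {
          wbc->encountered_congestion = 1;
          done = 1;
          break;
  }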

> Look at it this way - it is the responsibility of pdflush to keep
> the elevator full of background IO. It is the responsibility of
> the elevator to ensure that background IO doesn't starve all other
> types of IO.

Agreed.

> If pdflush doesn't run because it can't get CPU time,
> then background IO does not get issued, and system performance
> suffers as a result.

pdflush is able to keep the queue about 80% full, which should be enough
for efficient streaming IO. Small random IOs may hurt a bit, though.
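
The ~80% figure comes from the block layer's congestion thresholds
(sketched from memory, roughly as in block/blk-core.c of this era, so
treat the details as approximate):

  /*
   * Sketch of the congestion threshold setup (approximate, from
   * memory).  Congestion is signalled at roughly 7/8 of nr_requests
   * and cleared again at roughly 13/16, so a nonblocking writer that
   * backs off on congestion ends up holding the queue in the
   * ~80-90% full band.
   */
  static void blk_queue_congestion_threshold(struct request_queue *q)
  {
          int nr;

          nr = q->nr_requests - (q->nr_requests / 8) + 1;   /* ~87% */
          if (nr > q->nr_requests)
                  nr = q->nr_requests;
          q->nr_congestion_on = nr;

          nr = q->nr_requests - (q->nr_requests / 8)
                              - (q->nr_requests / 16) - 1;  /* ~81% */
          if (nr < 1)
                  nr = 1;
          q->nr_congestion_off = nr;
  }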

Thanks,
Fengguang
