Message-ID: <20090928010700.GE9464@discord.disaster>
Date:	Mon, 28 Sep 2009 11:07:00 +1000
From:	Dave Chinner <david@...morbit.com>
To:	Wu Fengguang <fengguang.wu@...el.com>
Cc:	Chris Mason <chris.mason@...cle.com>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"Li, Shaohua" <shaohua.li@...el.com>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"richard@....demon.co.uk" <richard@....demon.co.uk>,
	"jens.axboe@...cle.com" <jens.axboe@...cle.com>
Subject: Re: regression in page writeback

On Fri, Sep 25, 2009 at 02:45:03PM +0800, Wu Fengguang wrote:
> On Fri, Sep 25, 2009 at 01:04:13PM +0800, Dave Chinner wrote:
> > On Thu, Sep 24, 2009 at 08:38:20PM -0400, Chris Mason wrote:
> > > On Fri, Sep 25, 2009 at 10:11:17AM +1000, Dave Chinner wrote:
> > > > On Thu, Sep 24, 2009 at 11:15:08AM +0800, Wu Fengguang wrote:
> > > > > On Wed, Sep 23, 2009 at 10:00:58PM +0800, Chris Mason wrote:
> > > > > > The only place that actually honors the congestion flag is pdflush.
> > > > > > It's trivial to get pdflush backed up and make it sit down without
> > > > > > making any progress because once the queue congests, pdflush goes away.
> > > > > 
> > > > > Right. I guess that's more or less intentional - to give lowest priority
> > > > > to periodic/background writeback.
> > > > 
> > > > IMO, this is the wrong design. Background writeback should
> > > > have higher CPU/scheduler priority than normal tasks. If there is
> > > > sufficient dirty pages in the system for background writeback to
> > > > be active, it should be running *now* to start as much IO as it can
> > > > without being held up by other, lower priority tasks.
> > > 
> > > I'd say that an fsync from mutt or vi should be done at a higher prio
> > > than a background streaming writer.
> > 
> > I don't think you caught everything I said - synchronous IO is
> > un-throttled.
> 
> O_SYNC writes may be un-throttled in theory, however they seem to be
> throttled in practice:
> 
>   generic_file_aio_write
>     __generic_file_aio_write
>       generic_file_buffered_write
>         generic_perform_write
>           balance_dirty_pages_ratelimited
>     generic_write_sync
> 
> Do you mean some other code path?

In the context of the setup I was talking about, I meant that sync
IO _should_ be unthrottled because it is self-throttling by its
very nature. The current code makes no differentiation between the
two.
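
To illustrate the differentiation I mean (just a sketch, not a patch -
the test for "this write will wait for its own IO" is hand-waved, and
O_SYNC/IS_SYNC is only part of the story), it would sit right where
the call chain you quote above ends up:

	/* mm/filemap.c, generic_perform_write(), roughly 2.6.31-era */
	...
	/*
	 * Sketch: only throttle plain async buffered writes here.
	 * Writes that are going to wait for their own IO anyway
	 * (O_SYNC, IS_SYNC(inode)) are self-throttling.
	 */
	if (!(file->f_flags & O_SYNC) && !IS_SYNC(mapping->host))
		balance_dirty_pages_ratelimited(mapping);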

> > Background writeback should dump async IO to the elevator as fast as
> > it can, then get the hell out of the way. If you've got a UP system,
> > then the fsync can't be issued at the same time pdflush is running
> > (same as right now), and if you've got a MP system then fsync can
> > run at the same time.
> 
> I think you are right for system wide sync.
> 
> System wide sync seems to always wait for the queued bdi writeback
> work items to finish, which should be fine in terms of efficiency,
> except that sync could end up doing more work and even livelock.
> 
> > On the premise that sync IO is unthrottled and given that elevators
> > queue and issue sync IO separately from async writes, fsync latency
> > would be entirely derived from the elevator queuing behaviour, not
> > the CPU priority of pdflush.
> 
> It's not exactly CPU priority, but queue fullness priority.

That's exactly what I implied. The elevator manages the
queue fullness and decides when to block background or
foreground writes. The problem is, the elevator can't make a sane
scheduling decision because it can't tell the difference between
async and sync IO - we don't propagate that information to
the block layer from the VFS.

We have all the smarts in the block layer interface to distinguish
between sync and async IO and the elevators do smart stuff with this
information. But by throwing away that information at the VFS level,
we hamstring the elevator scheduler because it never sees any
"synchronous" write IO for data writes. Hence any synchronous data
write gets stuck in the same queue with all the background stuff
and doesn't get priority.
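
To make that concrete (a sketch only - I'm not claiming this is how
any particular filesystem does it today, and the exact plugging
variant of the flag doesn't matter here): the writeback_control
already carries the one bit the elevator needs, and the block layer
already has a flag to express it, so a data writeback submission path
could do something like:

	/*
	 * In a ->writepage/->writepages IO submission path: tag the
	 * bio as synchronous when the caller is going to wait for it,
	 * so the elevator can queue it with other sync IO instead of
	 * burying it behind background writeback.
	 */
	int rw = (wbc->sync_mode == WB_SYNC_ALL) ? WRITE_SYNC : WRITE;

	submit_bio(rw, bio);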

Hence right now if you issue an fsync or pageout, it's a crap shoot
as to whether the elevator will schedule it first or last behind
other IO. The fact that they then ignore congestion is relying on a
side effect to stop background writeback and allow the fsync to
monopolise the elevator. It is not predictable and hence IO patterns
under load will change all the time regardless of whether the system
is in a steady state or not.
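
The "side effect" is nothing more than this check (quoting roughly
from memory of a 2.6.30-era write_cache_pages(), so treat it as a
sketch): pdflush runs with nonblocking=1 and backs off here, while an
fsync (nonblocking=0) sails past it and keeps pushing into the
congested queue:

	if (wbc->nonblocking && bdi_write_congested(bdi)) {
		wbc->encountered_congestion = 1;
		done = 1;
		break;
	}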

IMO there are architectural failings from top to bottom in the
writeback stack - while people are interested in fixing stuff, I
figured that they should be pointed out to give y'all something to
think about...

> fsync operations always use nonblocking=0, so in fact they _used to_
> enjoy better priority than pdflush. The same is true of vmscan pageout, which
> calls writepage directly. Both won't back off on congested bdi.
> 
> So when there comes fsync/pageout, they will always be served first.

pageout is so horribly inefficient from an IO perspective it is not
funny. It is one of the reasons Linux sucks so much when under
memory pressure. It basically causes the system to do random 4k
writeback of dirty pages (and lumpy reclaim can make it
synchronous!). 
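
For reference, this is roughly the shape of what pageout() does for
every dirty page reclaim trips over (sketched from memory of a
2.6.30-ish mm/vmscan.c, so the exact field list is approximate): one
single-page ->writepage() call per page, in LRU order rather than
file order, which is where the random 4k IO pattern comes from:

	/* one-off wbc built for a single page, then thrown away */
	struct writeback_control wbc = {
		.sync_mode	= WB_SYNC_NONE,
		.for_reclaim	= 1,
		/* ... */
	};

	res = mapping->a_ops->writepage(page, &wbc);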

pageout needs an enema, and preferably it should defer to background
writeback to clean pages. Background writeback will clean pages
much, much faster than the random crap that pageout spews at the
disk right now.

Given that I can basically lock up my 2.6.30-based laptop for 10-15
minutes at a time with the disk running flat out in low memory
situations simply by starting to copy a large file(*), I think that
the way we currently handle dirty page writeback needs a bit of a
rethink.

(*) I had this happen 4-5 times last week moving VM images around on
my laptop, and it involved the Linux VM switching between pageout
and swapping to make more memory available while the copy was
hammering the same drive with dirty pages from foreground writeback.
It made for extremely fragmented files when the machine finally
recovered because of the non-sequential writeback patterns on the
single file being copied.  You can't tell me that this is sane,
desirable behaviour, and this is the sort of problem that I want
sorted out. I don't believe it can be fixed by maintaining the
number of uncoordinated, competing writeback mechanisms we currently
have.

> Small random IOs may hurt a bit though.

They *always* hurt, and under load, that appears to be the common IO
pattern that Linux is generating....

Cheers,

Dave.
-- 
Dave Chinner
david@...morbit.com
