Message-ID: <20090908180601.GN22901@mit.edu>
Date:	Tue, 8 Sep 2009 14:06:01 -0400
From:	Theodore Tso <tytso@....edu>
To:	Chris Mason <chris.mason@...cle.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Artem Bityutskiy <dedekind1@...il.com>,
	Jens Axboe <jens.axboe@...cle.com>,
	linux-kernel@...r.kernel.org, linux-fsdevel@...r.kernel.org,
	david@...morbit.com, hch@...radead.org, akpm@...ux-foundation.org,
	jack@...e.cz
Subject: Re: [PATCH 8/8] vm: Add an tuning knob for vm.max_writeback_mb

On Tue, Sep 08, 2009 at 12:29:36PM -0400, Chris Mason wrote:
> > 
> > Clearly the current limit isn't sufficient for some people,
> >  - xfs/btrfs seem generally stuck in balance_dirty_pages()'s
> > congestion_wait()
> >  - ext4 generates inconveniently small extents
> 
> This is actually two different sides of the same problem.  The filesystem
> knows that bytes 0-N in the file are setup for delayed allocation.
> Writepage is called on byte 0, and now the filesystem gets to decide how
> big an extent to make.
> 
> It could decide to make an extent based on the total number of bytes
> under delayed allocation, and hope the caller of writepage will be kind
> enough to send down the pages contiguously afterward (xfs), or it could
> make a smaller extent based on something closer to the total number of
> bytes this particular writepages() call plans on writing (I guess what
> ext4 is doing).
>
> Either way, if pdflush or the bdi thread or whoever ends up switching to
> another file during a big streaming write, the end result is that we
> fragment.  We may fragment the file (ext4) or we may fragment the
> writeback (xfs), but the end result isn't good.

Yep; the question is whether we want to fragment the read operation in
the future (ext4) or the write operation now (XFS).

> > Now, suppose it were to do something useful, I'd think we'd want to
> > limit write-out to whatever it takes to saturate the BDI.
> 
> If we don't want a blanket increase, I'd suggest that we just give the
> FS a way to say: 'I know nr_to_write is only 32, but if you just write a
> few blocks more, the system will be better off'.

Well, we can mostly do this now, using the XFS hack:

      wbc->nr_to_write *= 4;

Which is another way of saying, we *know* the page writeback routines
are on crack, so we'll ignore their suggestion of how many pages to
write, and we'll try to write more than what they asked us to write.

(This wasn't a proposed change; it's in Linux 2.6 mainline already;
see fs/xfs/linux-2.6/xfs_aops.c, in xfs_vm_writepage).  The fact that
filesystems are playing games like this should be a clear indication
that things are badly broken above....

> > As to the extents, shouldn't ext4 allocate extents based on the amount
> > of dirty pages in the file instead of however much we're going to write
> > out now?
> 
> It probably does a mixture of both.

It does do a mixture, but in a fairly primitive way.  I was thinking
about writing some ugly code to determine more precisely how many
dirty-and-delayed-allocation pages exist beyond what we've currently
been asked to write, but most of the problem would be solved simply by
having the page writeback routines send more pages down to the
filesystem, instead of having the file system work around brain damage
in the VM writeback routines.

						- Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
