Date:	Mon, 5 Jan 2009 16:16:07 -0500
From:	Theodore Tso <tytso@....edu>
To:	Jens Axboe <jens.axboe@...cle.com>
Cc:	Andrew Morton <akpm@...ux-foundation.org>,
	linux-ext4@...r.kernel.org, Arjan van de Ven <arjan@...radead.org>
Subject: Re: [PATCH, RFC] Use WRITE_SYNC in __block_write_full_page() if
	WBC_SYNC_ALL

On Mon, Jan 05, 2009 at 08:38:20PM +0100, Jens Axboe wrote:
> On Mon, Jan 05 2009, Theodore Tso wrote:
> > So long-term, I suspect the heuristic which makes sense is that in the
> > case where there is an fsync() in progress, any writes which take
> > place as a result of that fsync (which includes the journal records as
> > well as ordered writes that are being forced out as a result of
> > data=ordered and which block the fsync from returning), should get a
> > hint which propagates down to the block layer that these writes *are*
> > synchronous in that someone is waiting for them to complete.  They
> 
> If someone is waiting for them, they are by definition sync!

Surely.  :-)

Andrew's argument is that someone *shouldn't* be waiting for them ---
and he's right, although in the case of fsync() in particular, there's
nothing we can do; there will be a userspace application waiting by
definition.

The bigger problem is that until we split up the meaning of "unplug
the I/O queue" and "mark the I/O as synchronous", the way data=ordered
mode works, all of the data blocks get pushed out in 4k chunks.  So in
the worst case, if the user has just written some 200 megabytes of
vmlinuz and kernel modules, and then calls fsync(), the block I/O
layer might get flooded with some 50,000+ 4k writes, and if they are
all BIO_RW_SYNC, they might not get coalesced properly, and the result
would be badness.  One could argue that the journal layer should be
doing a better job of coalescing the write requests, but historically
the block layer has done this for us, so why add duplicate
functionality at the journalling layer?

In any case, that's why I'm really not convinced we can afford to use
BIO_RW_SYNC until we separate out the queue unplug functionality.
Maybe what makes sense is to have two flags, BIO_RW_UNPLUG and
BIO_RW_SYNCIO, and then make BIO_RW_SYNC be defined to be
(BIO_RW_UNPLUG|BIO_RW_SYNCIO)?

> > shouldn't necessarily be prioritized ahead of other reads (unless they
> > are readahead operations that couldn't be combined with reads that
> > *are* synchronous that someone is waiting for completion), but they
> > should be prioritized ahead of asynchronous writes.
> 
> And that is *exactly* what flagging the write as sync will do...

Great, so once we separate out the queue unplug request, I think this
should be exactly what we need.

							- Ted