Message-ID: <20100317165725.GB29548@lst.de>
Date:	Wed, 17 Mar 2010 17:57:25 +0100
From:	Christoph Hellwig <hch@....de>
To:	Avi Kivity <avi@...hat.com>
Cc:	Chris Webb <chris@...chsys.com>, balbir@...ux.vnet.ibm.com,
	KVM development list <kvm@...r.kernel.org>,
	Rik van Riel <riel@...riel.com>,
	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Christoph Hellwig <hch@....de>, Kevin Wolf <kwolf@...hat.com>
Subject: Re: [PATCH][RF C/T/D] Unmapped page cache control - via boot parameter

On Wed, Mar 17, 2010 at 06:40:30PM +0200, Avi Kivity wrote:
> Chris, can you carry out an experiment?  Write a program that pwrite()s 
> a byte to a file at the same location repeatedly, with the file opened 
> using O_SYNC.  Measure the write rate, and run blktrace on the host to 
> see what the disk (/dev/sda, not the volume) sees.  It should be a
> (write, flush, write, flush) pattern per pwrite, or similar (for
> writing the data and a journal block; perhaps even three writes will
> be needed).
> 
> Then scale this across multiple guests, measure and trace again.  If
> we're lucky, the flushes will be coalesced; if not, we need to work on it.
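
For concreteness, a minimal sketch of such a test program might look
like the following; the file name, offset and iteration count are just
placeholders:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int main(void)
{
	const int iterations = 10000;
	struct timespec start, end;
	char byte = 'x';
	double secs;
	int fd, i;

	/* Open with O_SYNC so each pwrite() is a synchronous write. */
	fd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}

	clock_gettime(CLOCK_MONOTONIC, &start);
	for (i = 0; i < iterations; i++) {
		/* Rewrite the same byte at the same offset every time. */
		if (pwrite(fd, &byte, 1, 0) != 1) {
			perror("pwrite");
			return 1;
		}
	}
	clock_gettime(CLOCK_MONOTONIC, &end);

	secs = (end.tv_sec - start.tv_sec) +
	       (end.tv_nsec - start.tv_nsec) / 1e9;
	printf("%d O_SYNC pwrites in %.2fs (%.0f writes/s)\n",
	       iterations, secs, iterations / secs);
	close(fd);
	return 0;
}

Compile with something like "gcc -o osync_test osync_test.c -lrt" (the
names are placeholders) and, while it runs in the guest, run e.g.
"blktrace -d /dev/sda -o - | blkparse -i -" on the host to see whether
each pwrite shows up as a (write, flush) pair and whether flushes from
several guests get merged.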

As the person who has written quite a bit of the current O_SYNC
implementation and reviewed the rest of it, I can tell you that
those flushes won't be coalesced.  If we always rewrite the same block
we do the cache flush from the fsync method, and there is nothing
to coalesce them there.  If you actually do modify metadata (e.g. by
using the new real O_SYNC, which I introduced in 2.6.33 but which
userspace hasn't picked up yet, instead of the old one that was always
really O_DSYNC) you might hit a very limited transaction merging window
in some filesystems, but it's generally very small for a good reason:
if it were too large we'd make the transaction in progress wait for
I/O in another one just because we expect transactions to coalesce
later.  There was a long discussion about tuning fsync transaction
batching for ext3 a while ago.
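
For what it's worth, a quick sketch like the one below shows which
variant your userspace headers actually give you (the file name is a
placeholder; with pre-2.6.33-style headers both names map to the same
data-integrity-only flag):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
	int dfd, sfd;

	if (O_SYNC == O_DSYNC) {
		/* Old-style userspace headers: the O_SYNC name only gives
		 * O_DSYNC (data integrity) semantics. */
		printf("O_SYNC == O_DSYNC: old O_DSYNC-only behaviour\n");
	} else {
		/* 2.6.33+ headers: O_SYNC adds full file integrity on top
		 * of O_DSYNC, so timestamps and other inode metadata get
		 * flushed too and the journal gets involved. */
		printf("distinct flags: O_SYNC=%#x O_DSYNC=%#x\n",
		       O_SYNC, O_DSYNC);
	}

	/* Data integrity only: the data plus whatever metadata is needed
	 * to read it back (e.g. block allocations). */
	dfd = open("testfile", O_WRONLY | O_CREAT | O_DSYNC, 0644);

	/* Full file integrity: other metadata such as timestamps as well. */
	sfd = open("testfile", O_WRONLY | O_CREAT | O_SYNC, 0644);

	return (dfd < 0 || sfd < 0);
}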

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
