Message-ID: <20100316102637.GA23584@lst.de>
Date: Tue, 16 Mar 2010 11:26:37 +0100
From: Christoph Hellwig <hch@....de>
To: Avi Kivity <avi@...hat.com>
Cc: Chris Webb <chris@...chsys.com>, balbir@...ux.vnet.ibm.com,
KVM development list <kvm@...r.kernel.org>,
Rik van Riel <riel@...riel.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Christoph Hellwig <hch@....de>, Kevin Wolf <kwolf@...hat.com>
Subject: Re: [PATCH][RF C/T/D] Unmapped page cache control - via boot parameter
Avi,
cache=writeback can be faster than cache=none for the same reasons
a disk cache speeds up access.  As long as the I/O mix contains more
asynchronous than synchronous writes it allows the host to do much
more reordering, limited only by the cache size (which can be quite
huge when using the host pagecache) and the amount of cache flushes
coming from the guest.  If you have an fsync-heavy workload or metadata
operations with a filesystem like the current XFS you will get lots
of cache flushes that limit the benefit of the additional cache.
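For reference, this tradeoff is selected per drive on the qemu command
line; a minimal sketch (the image path, interface, and other machine
options are placeholders, not taken from the thread):

```shell
# cache=none: bypass the host pagecache (O_DIRECT); only the guest caches.
qemu-system-x86_64 -drive file=/path/to/guest.img,if=virtio,cache=none

# cache=writeback: writes complete once they reach the host pagecache.
# Faster for asynchronous-write-heavy mixes, but anything the guest has
# not yet flushed is lost if the host crashes.
qemu-system-x86_64 -drive file=/path/to/guest.img,if=virtio,cache=writeback
```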
If you don't have a lot of cache flushes, e.g. due to dumb
applications that do not issue fsync, or because you run ext3 in its
default mode, which never issues cache flushes, the performance win
will be enormous, but so will the potential data loss and corruption.
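The flush boundary the host must honor is simply whatever the guest
issues via fsync; a quick way to see the two cases from userspace
(scratch file name is arbitrary):

```shell
# Buffered write: dd returns as soon as the data sits in the pagecache;
# nothing forces it out to stable storage.
dd if=/dev/zero of=scratch.img bs=4096 count=256 2>/dev/null

# Same write with conv=fsync: dd calls fsync() on the output file before
# exiting, forcing a cache flush all the way down to the disk.
dd if=/dev/zero of=scratch.img bs=4096 count=256 conv=fsync 2>/dev/null
```

With cache=writeback only the second variant is guaranteed to be on
the host's stable storage when the command returns.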
But even for something like btrfs, which does provide data integrity
and issues cache flushes fairly efficiently, cache=writeback may
provide quite a nice speedup, especially with multiple guests
accessing the same spindle(s).
But I wouldn't be surprised if IBM's extreme differences are indeed
due to the extremely unsafe ext3 default behaviour.