Message-ID: <20100316104422.GA24258@lst.de>
Date: Tue, 16 Mar 2010 11:44:22 +0100
From: Christoph Hellwig <hch@....de>
To: Avi Kivity <avi@...hat.com>
Cc: Christoph Hellwig <hch@....de>, Chris Webb <chris@...chsys.com>,
balbir@...ux.vnet.ibm.com,
KVM development list <kvm@...r.kernel.org>,
Rik van Riel <riel@...riel.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Kevin Wolf <kwolf@...hat.com>
Subject: Re: [PATCH][RFC/T/D] Unmapped page cache control - via boot parameter
On Tue, Mar 16, 2010 at 12:36:31PM +0200, Avi Kivity wrote:
> Are you talking about direct volume access or qcow2?
Doesn't matter.
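(For concreteness, the two setups being compared look something like
this on the qemu command line; the volume path here is made up:

  qemu -drive file=/dev/vg0/guest,if=virtio,cache=none ...
  qemu -drive file=/dev/vg0/guest,if=virtio,cache=writeback ...

The same applies with a qcow2 file as the backing image.)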
> For direct volume access, I still don't get it. The number of barriers
> issued by the host must equal (or exceed, but that's pointless) the
> number of barriers issued by the guest. cache=writeback allows the host
> to reorder writes, but so does cache=none. Where does the difference
> come from?
>
> Put it another way. In an unvirtualized environment, if you implement a
> write cache in a storage driver (not device), and sync it on a barrier
> request, would you expect to see a performance improvement?
cache=none only allows very limited reordering in the host. O_DIRECT
is synchronous on the host, so there's just some very limited reordering
going on in the elevator if we have other I/O going on in parallel.
In addition to that the disk write cache can perform limited reordering
and caching, but the disk cache has a rather limited size. The host
pagecache gives a much wider opportunity to reorder, especially if
the guest workload is not cache-flush heavy. If the guest workload
is extremely cache-flush heavy the usefulness of the pagecache is rather
limited, as we'll only use very little of it, but pay by having to do
a data copy. If the workload is not cache-flush heavy, and we have
multiple guests doing I/O to the same spindles, it will allow the host
to do much more efficient data writeout by being able to do better
ordered (less seeky) and bigger I/O (especially if the host has real
storage compared to IDE for the guest).
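To make the difference concrete, here is a minimal standalone sketch
(my own illustration, not code from qemu; the file name and sizes are
made up) of the two I/O paths described above:

#define _GNU_SOURCE	/* for O_DIRECT */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	void *buf;

	/* O_DIRECT path, roughly what cache=none does: the write
	 * bypasses the host pagecache and returns only once the
	 * device has accepted the data, so the only reordering left
	 * happens in the elevator and in the disk write cache. */
	int fd = open("testfile", O_WRONLY | O_CREAT | O_DIRECT, 0600);
	if (fd < 0) { perror("open O_DIRECT"); return 1; }

	/* O_DIRECT requires sector-aligned buffers and lengths. */
	if (posix_memalign(&buf, 4096, 4096))
		return 1;
	memset(buf, 0, 4096);
	if (pwrite(fd, buf, 4096, 0) != 4096)
		perror("pwrite direct");
	close(fd);

	/* Buffered path, roughly what cache=writeback does: the
	 * write lands in the host pagecache and returns immediately;
	 * nothing is durable until a guest cache flush is translated
	 * into fdatasync(), or until background writeout kicks in. */
	fd = open("testfile", O_WRONLY);
	if (fd < 0) { perror("open buffered"); return 1; }
	if (pwrite(fd, buf, 4096, 0) != 4096)
		perror("pwrite buffered");
	fdatasync(fd);	/* the expensive ordering point */
	close(fd);
	free(buf);
	return 0;
}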
> >If you don't have a lot of cache flushes, e.g. due to dumb
> >applications that do not issue fsync, or ext3 running in its default
> >mode which never issues cache flushes, the benefit will be enormous,
> >but so will the potential data loss and corruption.
> >
>
> Shouldn't the host never issue cache flushes in this case? (for direct
> volume access; qcow2 still needs flushes for metadata integrity).
If the guest never issues a flush then neither will the host, indeed.
Data will only go to disk by background writeout or under memory
pressure.
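To illustrate (my sketch, not the actual qemu code, which goes through
bdrv_flush()): with cache=writeback a guest flush on a raw volume
boils down to an fdatasync() on the open image:

#include <unistd.h>

/* Hypothetical host-side handler: under cache=writeback, a guest
 * cache flush is the only point that forces image data out of the
 * host pagecache and down to stable storage. */
static int handle_guest_flush(int image_fd)
{
	/* fdatasync() writes back the dirty pagecache pages for the
	 * image and flushes the disk write cache below it. */
	return fdatasync(image_fd);
}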