Message-ID: <20100322210448.GA12635@arachsys.com>
Date: Mon, 22 Mar 2010 21:04:48 +0000
From: Chris Webb <chris@...chsys.com>
To: Anthony Liguori <anthony@...emonkey.ws>
Cc: Avi Kivity <avi@...hat.com>, balbir@...ux.vnet.ibm.com,
KVM development list <kvm@...r.kernel.org>,
Rik van Riel <riel@...riel.com>,
KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
"linux-mm@...ck.org" <linux-mm@...ck.org>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH][RFC/T/D] Unmapped page cache control - via boot
	parameter
Chris Webb <chris@...chsys.com> writes:
> Okay. What I was driving at in describing these systems as 'already broken'
> is that they will already lose data (in this sense) if they're run on bare
> metal with normal commodity SATA disks with their 32MB write caches on. That
> configuration surely describes the vast majority of PC-class desktops and
> servers!
>
> If I understand correctly, your point here is that the small cache on a real
> SATA drive gives a relatively small time window for data loss, whereas the
> worry with cache=writeback is that the host page cache can be gigabytes, so
> the time window for unsynced data to be lost is potentially enormous.
>
> Isn't the fix for that just forcing periodic sync on the host to bound-above
> the time window for unsynced data loss in the guest?
For the benefit of the archives, it turns out the simplest fix for this is
already implemented as a VM sysctl in Linux. Set vm.dirty_bytes to 32<<20
and the size of the dirty page cache is bounded above by 32MB, so we are
simulating exactly the case of a SATA drive with a 32MB writeback cache.

Unless I'm missing something, the risk to guest OSes in this configuration
should therefore be exactly the same as the risk from running on normal
commodity hardware with such drives and no expensive battery-backed RAM.
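For anyone reading this in the archives, a minimal sketch of what applying
that bound looks like on a host (the 32MB figure matches the drive-cache
analogy above; the sysctl paths are the standard Linux interfaces, and the
write commands are shown commented out since they need root):

```shell
# The bound discussed above: 32<<20 bytes = 32MB.
limit=$((32<<20))
echo "$limit"   # prints 33554432

# Apply at runtime (requires root); caps dirty page cache at 32MB:
#   sysctl -w vm.dirty_bytes="$limit"

# Persist across reboots via /etc/sysctl.conf:
#   vm.dirty_bytes = 33554432

# The current value can be read without privileges:
cat /proc/sys/vm/dirty_bytes 2>/dev/null || true
```

Note that vm.dirty_bytes and vm.dirty_ratio are mutually exclusive: setting
one zeroes the other, so check which is in effect before relying on the
bound.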
Cheers,
Chris.