Message-ID: <20110920200351.GH6690@jl-vm1.vm.bytemark.co.uk>
Date: Tue, 20 Sep 2011 21:03:51 +0100
From: Jamie Lokier <jamie@...reable.org>
To: Marcelo Tosatti <mtosatti@...hat.com>
Cc: Eric B Munson <emunson@...bm.net>,
Anthony Liguori <anthony@...emonkey.ws>, avi@...hat.com,
tglx@...utronix.de, mingo@...hat.com, hpa@...or.com, arnd@...db.de,
riel@...hat.com, kvm@...r.kernel.org, linux-kernel@...r.kernel.org,
linux-arch@...r.kernel.org, kvm-ppc@...r.kernel.org,
aliguori@...ibm.com, raharper@...ibm.com, kvm-ia64@...r.kernel.org,
Glauber Costa <glommer@...il.com>, mjwolf@...ibm.com
Subject: Re: [PATCH 0/4] Avoid soft lockup message when KVM is stopped by host
Marcelo Tosatti wrote:
> In case the VM stops for whatever reason, the host system is not
> supposed to adjust time related hardware state to compensate, in an
> attempt to present apparent continuous time.
>
> If you save a VM and then restore it later, it is the guest's
> responsibility to adjust its time representation.
If the guest doesn't know it's been stopped, then its time
representation will be wrong until it finds out, e.g. after a few
minutes with NTP; even a few seconds of wrong time can be too long.
That is unfortunate when it happens, because it breaks the coherence of
any timed-lease caching the guest is involved in.  I.e. where the guest
acquires a lock on some data object (like a file in NFS) that it can
then access efficiently without network round trips (similar to MESI),
with all nodes having agreed that it stays coherent for, say, 5 seconds
before the lease must be renewed or broken.  (It's just a way to reduce
latency.)
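
For illustration only (not from this thread), a lease check of that
kind might look roughly like the sketch below, using CLOCK_MONOTONIC
and an assumed 5-second lease; the names LEASE_SECONDS, lease_expiry
etc. are made up.  If the VM is stopped and later resumed without the
guest noticing, lease_still_valid() can keep returning true even
though the other nodes broke the lease long ago.

/* Sketch of a timed-lease validity check based on CLOCK_MONOTONIC. */
#include <stdbool.h>
#include <time.h>

#define LEASE_SECONDS 5

static struct timespec lease_expiry;   /* when our lease runs out */

static void lease_acquired(void)
{
	clock_gettime(CLOCK_MONOTONIC, &lease_expiry);
	lease_expiry.tv_sec += LEASE_SECONDS;
}

static bool lease_still_valid(void)
{
	struct timespec now;

	clock_gettime(CLOCK_MONOTONIC, &now);
	return now.tv_sec < lease_expiry.tv_sec ||
	       (now.tv_sec == lease_expiry.tv_sec &&
		now.tv_nsec < lease_expiry.tv_nsec);
}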
But we can't trust CLOCK_MONOTONIC when a VM is involved; it's just
one of those facts of life.  So instead the effort goes into trying to
detect when a VM is involved and then distrusting the clock.
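
To make "detect when a VM is involved" concrete, here's one rough
userspace sketch (my illustration, not part of this thread) using the
x86 CPUID hypervisor-present bit (leaf 1, ECX bit 31); other
architectures need different checks, and a guest that sees the bit set
could e.g. shorten lease times or re-validate after any large jump.

/* Sketch: x86 hypervisor detection via CPUID, using GCC's cpuid.h. */
#include <stdbool.h>
#include <cpuid.h>

static bool running_under_hypervisor(void)
{
	unsigned int eax, ebx, ecx, edx;

	if (!__get_cpuid(1, &eax, &ebx, &ecx, &edx))
		return false;
	return ecx & (1u << 31);	/* hypervisor-present bit */
}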
(Non-VM) suspend/resume is similar, but there's usually a way to
be notified about that as it happens.
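
For example (again just an illustration, with made-up names), the
in-kernel PM notifier chain is one such notification path: a driver
can register a callback and drop any time-based state when it sees
PM_POST_SUSPEND.

/* Sketch: invalidating time-based state on resume via a PM notifier. */
#include <linux/suspend.h>
#include <linux/notifier.h>
#include <linux/module.h>

static int my_pm_notify(struct notifier_block *nb, unsigned long event,
			void *unused)
{
	if (event == PM_POST_SUSPEND) {
		/* time may have jumped; drop leases, re-sync clocks, etc. */
	}
	return NOTIFY_OK;
}

static struct notifier_block my_pm_nb = {
	.notifier_call = my_pm_notify,
};

static int __init my_init(void)
{
	return register_pm_notifier(&my_pm_nb);
}

static void __exit my_exit(void)
{
	unregister_pm_notifier(&my_pm_nb);
}

module_init(my_init);
module_exit(my_exit);
MODULE_LICENSE("GPL");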
-- Jamie