Message-ID: <1369923919-26981-1-git-send-email-david.vrabel@citrix.com>
Date: Thu, 30 May 2013 15:25:17 +0100
From: David Vrabel <david.vrabel@...rix.com>
To: <xen-devel@...ts.xen.org>
CC: David Vrabel <david.vrabel@...rix.com>,
Konrad Rzeszutek Wilk <konrad.wilk@...cle.com>,
John Stultz <john.stultz@...aro.org>,
<linux-kernel@...r.kernel.org>
Subject: [PATCHv3 0/2] xen: maintain an accurate persistent clock in more cases
The kernel has limited support for updating the persistent clock or
RTC when NTP is synced. This has the following limitations:
* The persistent clock is not updated on step changes. This leaves a
window where it will be incorrect (while NTP resyncs).
* Xen guests use the Xen wallclock as their persistent clock. dom0
maintains this clock, so it is persistent only for domUs, not for
dom0 itself.
This series fixes the above limitations and depends on "x86: increase
precision of x86_platform.get/set_wallclock()", which was previously
posted [1].
[ On a related note, with CONFIG_HZ=1000 sync_cmos_clock() is always
scheduled ~3ms too late which causes it to repeatedly try to
reschedule in ~997 ms and ends up never calling
update_persistent_clock(). With HZ=250, the error is ~1 ms too late,
which is close enough.
It is not clear where this systematic error comes from or whether
this is only a Xen specific bug. I don't have time to investigate
right now. ]
Changes since v2:
Don't peek at the timekeeper internals (use __current_kernel_time()
instead). Use the native set_wallclock hook in dom0.
Changes since v1:
Reworked to use the pvclock_gtod notifier to sync the wallclock (this
looked similar to what a KVM host does). update_persistent_clock()
will now only update the CMOS RTC.
David
[1] http://lists.xen.org/archives/html/xen-devel/2013-05/msg01402.html
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/