Message-ID: <CALCETrXeXCvbxAuRuLwWoF3-zvjhzzjj46VZ3RfgUEhb0SeK6A@mail.gmail.com>
Date: Tue, 8 Dec 2020 20:08:56 -0800
From: Andy Lutomirski <luto@...nel.org>
To: Thomas Gleixner <tglx@...utronix.de>
Cc: Andy Lutomirski <luto@...nel.org>,
Marcelo Tosatti <mtosatti@...hat.com>,
Maxim Levitsky <mlevitsk@...hat.com>,
kvm list <kvm@...r.kernel.org>,
"H. Peter Anvin" <hpa@...or.com>,
Paolo Bonzini <pbonzini@...hat.com>,
Jonathan Corbet <corbet@....net>,
Jim Mattson <jmattson@...gle.com>,
Wanpeng Li <wanpengli@...cent.com>,
"open list:KERNEL SELFTEST FRAMEWORK"
<linux-kselftest@...r.kernel.org>,
Vitaly Kuznetsov <vkuznets@...hat.com>,
Sean Christopherson <sean.j.christopherson@...el.com>,
open list <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
"maintainer:X86 ARCHITECTURE (32-BIT AND 64-BIT)" <x86@...nel.org>,
Joerg Roedel <joro@...tes.org>, Borislav Petkov <bp@...en8.de>,
Shuah Khan <shuah@...nel.org>,
Andrew Jones <drjones@...hat.com>,
Oliver Upton <oupton@...gle.com>,
"open list:DOCUMENTATION" <linux-doc@...r.kernel.org>
Subject: Re: [PATCH v2 1/3] KVM: x86: implement KVM_{GET|SET}_TSC_STATE

On Tue, Dec 8, 2020 at 4:19 PM Thomas Gleixner <tglx@...utronix.de> wrote:
>
> On Tue, Dec 08 2020 at 12:32, Andy Lutomirski wrote:
> >> On Dec 8, 2020, at 11:25 AM, Thomas Gleixner <tglx@...utronix.de> wrote:
> >> One issue here is that guests might want to run their own NTP/PTP. One
> >> reason to do that is that some people prefer the leap second smearing
> >> NTP servers.
> >
> > I would hope that using this part would be optional on the guest’s
> > part. Guests should be able to use just the CLOCK_MONOTONIC_RAW part
> > or fancier stuff at their option.
> >
> > (Hmm, it would, in principle, be possible for a guest to use the
> > host’s TAI but still smear leap seconds. Even without virt, smearing
> > could be a per-timens option.)
>
> No. Don't even think about it. Read the thread:
>
> https://lore.kernel.org/r/20201030110229.43f0773b@jawa
>
> all the way through the end and then come up with a real proposal which
> solves all of the issues mentioned there.

You're misunderstanding me, which is entirely reasonable, since my
description was crap. In particular, what I meant by smearing is not
at all what's done today. Let me try again. The thing below is my
proposal, not necessarily a description of exactly what happens now.
(I read most of that thread, and I read most of this thread, and I've
hacked on the time code, cursed at the KVM code, modified the KVM
code, cursed at the KVM code some more, etc. None of which is to say
that I have a full understanding of every possible timekeeping nuance,
but I'm pretty sure I can at least pretend to understand some of it.)

We have some time source that we can read (e.g. TSC). Let's call it
read_time(). It returns an integer (64-bits would be nice, but we'll
take what we can get). From the output of read_time(), Linux user
programs, and the kernel itself (and guests perhaps, see below) would
like to produce various outputs. Each of the operations below is
protected by a seqlock retry loop, which I'll omit in the
descriptions. Also, when I say *
below, I mean the usual calculation with a multiplication and a shift.

All of these are only valid if t_start <= read_time() <= t_end, and
they all assume that read_time() hasn't wrapped and gotten into
that interval again. There is nothing at all we can do in software if
we wrap like this. t_end isn't necessarily something we compute
explicitly --- it might just be the case that, if read_time() > t_end,
our arithmetic overflows and we return garbage. But t_end might be a
real thing on architectures where vdso_cycles_ok() actually does
something (sigh, x86).
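
To make the seqlock and the * notation concrete, each per-clock read
below boils down to roughly the following. This is only a sketch under
made-up names -- the struct layout, field names, and NR_CLOCKS are
invented, READ_ONCE()/smp_rmb()/cpu_relax() are the usual kernel-style
primitives, and multiply-overflow handling is elided:

/* Hypothetical per-clock parameters, published by the timekeeping code. */
struct clock_data {
        u32 mult;               /* scaled frequency */
        u32 shift;
        u64 t_start;            /* read_time() value at the last update */
        u64 offset;             /* clock value at t_start, in ns */
};

struct vdso_data {
        u32 seq;                /* seqlock: odd while an update is in flight */
        struct clock_data clock[NR_CLOCKS];
};

static u64 read_clock(const struct vdso_data *vd, int clk)
{
        const struct clock_data *c = &vd->clock[clk];
        u64 cycles, ns;
        u32 seq;

        do {
                /* seqlock read side: wait out a concurrent update */
                while ((seq = READ_ONCE(vd->seq)) & 1)
                        cpu_relax();
                smp_rmb();

                cycles = read_time();
                /* the mult-and-shift mentioned above */
                ns = c->offset +
                     (((cycles - c->t_start) * c->mult) >> c->shift);

                smp_rmb();
        } while (READ_ONCE(vd->seq) != seq);

        return ns;
}
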
CLOCK_MONOTONIC_RAW: not affected by NTP, adjtimex, etc.

  return mult[monotonic_raw] * (read_time() - t_start) + offset[monotonic_raw];

CLOCK_MONOTONIC: This is never affected by leap-second smearing. If
userspace tries to smear it in the new mode, userspace gets to keep
all the pieces.

  return mult[monotonic] * (read_time() - t_start) + offset[monotonic];

CLOCK_TAI: This is not smeared.

  return mult[tai] * (read_time() - t_start) + offset[tai];

CLOCK_SANE_REALTIME: This is not smeared either.

  return mult[sane_realtime] * (read_time() - t_start) + offset[sane_realtime];

And yes, we require that mult[monotonic] == mult[tai] == mult[sane_realtime].

CLOCK_SMEARED_REALTIME: This is a leap-second-smeared variant of
CLOCK_SANE_REALTIME.

  return mult[smeared_realtime] * (read_time() - t_start) + offset[smeared_realtime];

CLOCK_REALTIME: maps to CLOCK_SANE_REALTIME or CLOCK_SMEARED_REALTIME
depending on user preference. Doing this without an extra branch
somewhere might take a bit of thought.
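
One way that might work, purely as a sketch (there is no clk_map[]
anywhere today): give the per-process or per-timens data a small
clockid-to-slot table, populated once according to the smear
preference, so the hot path does a table lookup instead of testing the
policy. Restating the hypothetical struct from above with one extra
field, and reusing read_clock():

/* Hypothetical internal slots; CLOCK_REALTIME has no slot of its own. */
enum {
        CLK_MONO_RAW, CLK_MONO, CLK_TAI,
        CLK_RT_SANE, CLK_RT_SMEARED,
        NR_CLOCKS
};

struct vdso_data {
        u32 seq;
        u8  clk_map[16];                /* indexed by clockid */
        struct clock_data clock[NR_CLOCKS];
};

static u64 do_clock_gettime(const struct vdso_data *vd, clockid_t id)
{
        /* clk_map[CLOCK_REALTIME] points at CLK_RT_SANE or CLK_RT_SMEARED,
         * so there is no smear-policy branch here. */
        return read_clock(vd, vd->clk_map[id]);
}
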
If t > t_end, then we fall back to a syscall if we're in user mode,
and we fall back to a hypercall or just spin if we're in the kernel.
But see below.
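
In terms of the earlier sketch, and assuming t_end is stored
explicitly as one more field in the hypothetical struct clock_data,
that check would sit right next to the counter read, with
fallback_read() standing in for the syscall, hypercall, or spin:

                cycles = read_time();
                if (cycles < c->t_start || cycles > c->t_end)
                        return fallback_read(clk);
                /* ...then the multiply and shift as before */
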
As far as I can tell, if the kernel were to do something utterly
asinine like adding some arbitrary value to TSC_ADJUST on all CPUs,
the kernel could do so correctly by taking the seqlock, making the
change, updating everything, and releasing the seqlock. This would be
nuts, but it's more or less the same thing that happens when a VM
migrates. So I think the host could migrate a guest without any
particular magic, except that there's a potential race if the old and
new systems happen to have close enough seqlock values that the guest
might start reading on the old host, finish on the new host, see the
same seqlock value, and end up with utter garbage. One way to
mitigate this would be, in paravirt mode, to have an extra per-guest
page that contains a count of how many times the guest has migrated.
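
For illustration only (no such page exists today), the guest-side read
would then just wrap the seqlock loop from the earlier sketch in a
migration-count check:

/* Hypothetical paravirt page, mapped into the guest read-only and
 * bumped by the host on every migration. */
struct pv_migration_page {
        u64 migration_count;
};

static u64 guest_read_clock(const struct vdso_data *vd,
                            const struct pv_migration_page *pv, int clk)
{
        u64 ns, mig;

        do {
                mig = READ_ONCE(pv->migration_count);
                smp_rmb();
                ns = read_clock(vd, clk);       /* seqlock-protected read */
                smp_rmb();
        } while (READ_ONCE(pv->migration_count) != mig);

        return ns;
}
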
Timens would work a lot like it does today, but the mechanism that
tells the vdso code to use timens might need tweaking.

I could easily be missing something that prevents this from working,
but I'm not seeing any fundamental problems.

If we want to get fancy, we can make a change that I've contemplated
for a while -- we could make t_end explicit and have two copies of all
these data structures. The reader would use one copy if t < t_change
and a different copy if t >= t_change. This would allow NTP-like code
in usermode to schedule a frequency shift to start at a specific time.
With some care, it would also allow the timekeeping code to update the
data structures without causing clock_gettime() to block while the
timekeeping code is running on a different CPU.
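
As a sketch of that variant (same made-up names as before), the data
would carry two complete parameter sets plus the switch point, and the
reader would pick a set based on the cycle count it just read:

struct vdso_data_2gen {
        u32 seq;
        u64 t_change;                   /* switch point, in read_time() units */
        struct clock_data clock[2][NR_CLOCKS];
};

static u64 read_clock_2gen(const struct vdso_data_2gen *vd, int clk)
{
        const struct clock_data *c;
        u64 cycles, ns;
        u32 seq;

        do {
                while ((seq = READ_ONCE(vd->seq)) & 1)
                        cpu_relax();
                smp_rmb();

                cycles = read_time();
                /* old parameters before t_change, new ones at or after it */
                c = &vd->clock[cycles >= vd->t_change][clk];
                ns = c->offset +
                     (((cycles - c->t_start) * c->mult) >> c->shift);

                smp_rmb();
        } while (READ_ONCE(vd->seq) != seq);

        return ns;
}

The idea being that the timekeeping code could fill in the inactive
half at leisure and only hold the seqlock for the brief flip of
t_change.
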
One other thing that might be worth noting: there's another thread
about "vmgenid". It's plausible that it's worth considering stopping
the guest or perhaps interrupting all vCPUs to allow it to take some
careful actions on migration for reasons that have nothing to do with
timekeeping.