Message-ID: <87h942qm2j.fsf@vitty.brq.redhat.com>
Date: Fri, 10 Feb 2017 13:15:00 +0100
From: Vitaly Kuznetsov <vkuznets@...hat.com>
To: Andy Lutomirski <luto@...capital.net>
Cc: KY Srinivasan <kys@...rosoft.com>,
Thomas Gleixner <tglx@...utronix.de>,
"x86\@kernel.org" <x86@...nel.org>, Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Haiyang Zhang <haiyangz@...rosoft.com>,
Stephen Hemminger <sthemmin@...rosoft.com>,
Dexuan Cui <decui@...rosoft.com>,
"linux-kernel\@vger.kernel.org" <linux-kernel@...r.kernel.org>,
"devel\@linuxdriverproject.org" <devel@...uxdriverproject.org>,
"virtualization\@lists.linux-foundation.org"
<virtualization@...ts.linux-foundation.org>
Subject: Re: [PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read method

Andy Lutomirski <luto@...capital.net> writes:
> On Thu, Feb 9, 2017 at 12:45 PM, KY Srinivasan <kys@...rosoft.com> wrote:
>>
>>
>>> -----Original Message-----
>>> From: Thomas Gleixner [mailto:tglx@...utronix.de]
>>> Sent: Thursday, February 9, 2017 9:08 AM
>>> To: Vitaly Kuznetsov <vkuznets@...hat.com>
>>> Cc: x86@...nel.org; Andy Lutomirski <luto@...capital.net>; Ingo Molnar
>>> <mingo@...hat.com>; H. Peter Anvin <hpa@...or.com>; KY Srinivasan
>>> <kys@...rosoft.com>; Haiyang Zhang <haiyangz@...rosoft.com>; Stephen
>>> Hemminger <sthemmin@...rosoft.com>; Dexuan Cui
>>> <decui@...rosoft.com>; linux-kernel@...r.kernel.org;
>>> devel@...uxdriverproject.org; virtualization@...ts.linux-foundation.org
>>> Subject: Re: [PATCH 2/2] x86/vdso: Add VCLOCK_HVCLOCK vDSO clock read
>>> method
>>>
>>> On Thu, 9 Feb 2017, Vitaly Kuznetsov wrote:
>>> > +#ifdef CONFIG_HYPERV_TSCPAGE
>>> > +static notrace u64 vread_hvclock(int *mode)
>>> > +{
>>> > + const struct ms_hyperv_tsc_page *tsc_pg =
>>> > + (const struct ms_hyperv_tsc_page *)&hvclock_page;
>>> > + u64 sequence, scale, offset, current_tick, cur_tsc;
>>> > +
>>> > + while (1) {
>>> > + sequence = READ_ONCE(tsc_pg->tsc_sequence);
>>> > + if (!sequence)
>>> > + break;
>>> > +
>>> > + scale = READ_ONCE(tsc_pg->tsc_scale);
>>> > + offset = READ_ONCE(tsc_pg->tsc_offset);
>>> > + rdtscll(cur_tsc);
>>> > +
>>> > + current_tick = mul_u64_u64_shr(cur_tsc, scale, 64) + offset;
>>> > +
>>> > + if (READ_ONCE(tsc_pg->tsc_sequence) == sequence)
>>> > + return current_tick;
>>>
>>> That sequence stuff lacks still a sensible explanation. It's fundamentally
>>> different from the sequence counting we do in the kernel, so documentation
>>> for it is really required.
>>
>> The host is updating multiple fields in this shared TSC page and the sequence number is
>> used to ensure that the guest sees a consistent set of values being published. If I remember
>> correctly, Xen has a similar mechanism.
>
> So what's the actual protocol? When the hypervisor updates the page,
> does it freeze all guest cpus? If not, how does it maintain
> atomicity?
I don't really know how it is implemented on the host side, but I *think*
freezing all guest CPUs is only required when *both* ReferenceTscScale and
ReferenceTscOffset need to be updated at the same time (Hyper-V hosts are
64-bit only, so a single 64-bit value can always be updated atomically)...
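To make the retry protocol and the reference-time formula explicit, here is a
minimal userspace-style sketch of the reader side as I understand it. This is
illustration only, not the patch code: read_tsc() stands in for rdtscll(),
volatile loads stand in for READ_ONCE(), and __uint128_t arithmetic replaces
mul_u64_u64_shr().

#include <stdint.h>

/* Field names mirror struct ms_hyperv_tsc_page from the patch above. */
struct tsc_page {
	volatile uint32_t tsc_sequence;	/* 0 means the TSC page is not usable */
	uint32_t reserved;
	volatile uint64_t tsc_scale;
	volatile uint64_t tsc_offset;
};

/* Stand-in for rdtscll(); x86-only GCC/Clang builtin. */
static inline uint64_t read_tsc(void)
{
	return __builtin_ia32_rdtsc();
}

/*
 * Reference time (100ns units) = ((tsc * tsc_scale) >> 64) + tsc_offset.
 *
 * The sequence is re-read after sampling scale/offset/TSC; if it changed,
 * the host has republished the page and we retry with the new contents.
 * Returns 0 when the page is disabled and the caller has to fall back to
 * reading the reference counter MSR instead.
 */
static uint64_t read_hv_reference_time(const struct tsc_page *pg)
{
	uint32_t sequence;
	uint64_t scale, offset, tsc;

	do {
		sequence = pg->tsc_sequence;
		if (!sequence)
			return 0;

		scale = pg->tsc_scale;
		offset = pg->tsc_offset;
		tsc = read_tsc();
	} while (pg->tsc_sequence != sequence);

	return (uint64_t)(((__uint128_t)tsc * scale) >> 64) + offset;
}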
--
Vitaly