Message-ID: <4AD3738B.6050200@goop.org>
Date:	Mon, 12 Oct 2009 11:20:59 -0700
From:	Jeremy Fitzhardinge <jeremy@...p.org>
To:	Avi Kivity <avi@...hat.com>
CC:	Dan Magenheimer <dan.magenheimer@...cle.com>,
	Jeremy Fitzhardinge <jeremy.fitzhardinge@...rix.com>,
	kurt.hackel@...cle.com, the arch/x86 maintainers <x86@...nel.org>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Glauber de Oliveira Costa <gcosta@...hat.com>,
	Xen-devel <xen-devel@...ts.xensource.com>,
	Keir Fraser <keir.fraser@...citrix.com>,
	Zach Brown <zach.brown@...cle.com>,
	Chris Mason <chris.mason@...cle.com>
Subject: Re: [Xen-devel] Re: [PATCH 3/5] x86/pvclock: add vsyscall implementation

On 10/10/09 11:10, Avi Kivity wrote:
> On 10/10/2009 02:24 AM, Jeremy Fitzhardinge wrote:
>> On 10/07/09 03:25, Avi Kivity wrote:
>>   
>>> def try_pvclock_vtime():
>>>    tsc, p0 = rdtscp()
>>>    v0 = pvclock[p0].version
>>>    tsc, p = rdtscp()
>>>    t = pvclock_time(pvclock[p], tsc)
>>>    if p != p0 or pvclock[p].version != v0:
>>>       raise Exception("Processor or timebase changed under our feet")
>>>    return t
>>>      
>> This doesn't quite work.
>>
>> If we end up migrating some time after the first rdtscp, then the
>> accesses to pvclock[] will be cross-cpu.  Since we don't make any strong
>> SMP memory ordering guarantees on updating the structure, the snapshot
>> isn't guaranteed to be consistent even if we re-check the version at the
>> end.
>>    
>
> We only hit this if we have a double migration, otherwise we see p != p0.
>
> Most likely all existing implementations do have a write barrier on
> the guest entry path, so if we add a read barrier between the two
> compares, that ensures we're reading from the same cpu again.

There's a second problem: if the time_info gets updated between the
first rdtscp and the first version fetch, then we won't have a
consistent tsc/time_info pair.  You could check whether tsc_timestamp >
tsc, but that won't necessarily work across save/restore/migrate.

>> So to use rdtscp we need to either redefine the update of
>> pvclock_vcpu_time_info to be SMP-safe, or keep the additional migration
>> check.
>>    
>
> I think we can update the ABI after verifying all implementations do
> have a write barrier.
>

I suppose that works if you assume that:

   1. every task->vcpu migration is associated with a hv/guest context
      switch, and
   2. every hv/guest context switch is a write barrier

I guess 2 is a given, but I can at least imagine cases where 1 might not
be true.  Maybe.  It all seems very subtle.

And I don't really see a gain.  You avoid maintaining a second version
number, but at the cost of two rdtscps.  In my measurements, the whole
vsyscall takes around 100ns to run, and a single rdtsc takes about 30ns,
so 30% of the total.  Unlike rdtsc, rdtscp is documented as being ordered in
the instruction stream, and so will take at least as long; two of them
will completely blow the vsyscall execution time.

(By contrast, lsl only takes around 10ns, which suggests it should be
used preferentially in vgetcpu anyway.)

AMD CPUs have traditionally been much better than Intel at these kinds
of things, so maybe rdtscp makes sense there.  Or maybe Nehalem is much
better than my Core2 Q6600.

    J

--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
