Message-ID: <4BC8D115.2010900@redhat.com>
Date:	Fri, 16 Apr 2010 11:05:25 -1000
From:	Zachary Amsden <zamsden@...hat.com>
To:	Jeremy Fitzhardinge <jeremy@...p.org>
CC:	Glauber Costa <glommer@...hat.com>, kvm@...r.kernel.org,
	linux-kernel@...r.kernel.org, avi@...hat.com,
	Marcelo Tosatti <mtosatti@...hat.com>
Subject: Re: [PATCH 1/5] Add a global synchronization point for pvclock

On 04/16/2010 10:36 AM, Jeremy Fitzhardinge wrote:
> On 04/15/2010 11:37 AM, Glauber Costa wrote:
>    
>> In recent stress tests, it was found that pvclock-based systems
>> could seriously warp in smp systems. Using ingo's time-warp-test.c,
>> I could trigger a scenario as bad as 1.5mi warps a minute in some systems.
>>
>>      
> Is that "1.5 million"?
>
>    
>> (to be fair, it wasn't that bad in most of them). Investigating further, I
>> found out that such warps were caused by the very offset-based calculation
>> pvclock is based on.
>>
>>      
> Is the problem that the tscs are starting out of sync, or that they're
> drifting relative to each other over time?  Do the problems become worse
> the longer the uptime?  How large are the offsets we're talking about here?
>    

This is one source of the problem, but the same thing happens at many 
levels... TSCs may start out of sync, drift between sockets, be badly 
re-calibrated by the BIOS, etc... and the issue persists even if the 
TSCs are perfectly in sync, because the measurement of them is not.

So after reading TSC == 100,000 units at time A and then waiting 10 
units, one may read TSC == 100,010 +/- 5 units, because the code stream 
is not perfectly serialized - nor can it be.  There will always be some 
amount of error unless running in perfect lock-step, which only happens 
in a simulator.
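To make the offset-based failure mode concrete, here is a toy model in plain C. The struct and field names loosely mirror the idea of pvclock's per-vCPU (system_time, tsc_timestamp) snapshot but are illustrative, not the actual kernel structures; the numbers just encode a few units of calibration error on one CPU:

```c
#include <stdint.h>

/* Toy model of per-vCPU clock parameters.  Each vCPU carries a snapshot
 * of host time and the TSC value captured at (nominally) the same
 * instant; names are illustrative, not the real pvclock structs. */
struct vcpu_clock {
    uint64_t system_time;   /* host time captured at calibration */
    uint64_t tsc_timestamp; /* TSC value captured at the same instant */
};

/* Offset-based read: elapsed TSC ticks since calibration, added to the
 * snapshot.  Any error in the captured pair becomes a per-CPU offset. */
static uint64_t clock_read(const struct vcpu_clock *c, uint64_t tsc_now)
{
    return c->system_time + (tsc_now - c->tsc_timestamp);
}
```

With cpu0 calibrated as {5000, 1000} and cpu1 calibrated a little later but 5 units off as {5007, 1012} (it should have been {5012, 1012}), a read on cpu0 at TSC 1020 yields 5020, while a *later* read on cpu1 at TSC 1022 yields 5017 - time appears to go backwards by 3 units even though both TSCs ticked forward.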

This inherent measurement error can cause apparent time to go backwards 
when measured simultaneously across multiple CPUs, or when 
re-calibrating against an external clocksource.  Combined with the 
other factors above, it can be of sufficient magnitude to be noticed.  
KVM clock is particularly exposed to the problem because the TSC is 
measured and recalibrated for each virtual CPU whenever there is a 
physical CPU switch, so micro-adjustments forwards and backwards may 
occur during the recalibration - and can appear to the guest as a real 
backwards time warp.  I have some patches to fix that issue, but the 
SMP problem remains to be fixed - and is addressed quite thoroughly by 
this patch.
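The global synchronization point the patch subject refers to can be sketched as a single "last value returned to anyone" shared across all vCPUs, updated with a compare-and-exchange loop. This is a simplified model in C11 atomics under my reading of the approach, not the kernel code itself:

```c
#include <stdatomic.h>
#include <stdint.h>

/* One value shared by ALL vCPUs: the largest time ever handed out. */
static _Atomic uint64_t last_value;

/* Clamp a freshly computed per-CPU reading against the global last
 * value, so the clock never appears to run backwards even when the
 * per-CPU calculations disagree by a few units. */
uint64_t pvclock_read(uint64_t raw)
{
    uint64_t last = atomic_load(&last_value);
    for (;;) {
        if (raw < last)
            return last;   /* another CPU is ahead: return its value */
        if (atomic_compare_exchange_weak(&last_value, &last, raw))
            return raw;    /* we advanced the global clock */
        /* cmpxchg failed: 'last' was reloaded, re-check and retry */
    }
}
```

A reading of 95 arriving after a reading of 100 is clamped to 100, which trades a brief plateau in the clock for strict monotonicity - exactly the property the guest needs.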

Zach
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
