Message-ID: <87r0qwfrm0.ffs@tglx>
Date: Wed, 31 May 2023 17:27:51 +0200
From: Thomas Gleixner <tglx@...utronix.de>
To: Peter Zijlstra <peterz@...radead.org>, bigeasy@...utronix.de
Cc: mark.rutland@....com, maz@...nel.org, catalin.marinas@....com,
will@...nel.org, chenhuacai@...nel.org, kernel@...0n.name,
hca@...ux.ibm.com, gor@...ux.ibm.com, agordeev@...ux.ibm.com,
borntraeger@...ux.ibm.com, svens@...ux.ibm.com,
pbonzini@...hat.com, wanpengli@...cent.com, vkuznets@...hat.com,
mingo@...hat.com, bp@...en8.de, dave.hansen@...ux.intel.com,
x86@...nel.org, hpa@...or.com, jgross@...e.com,
boris.ostrovsky@...cle.com, daniel.lezcano@...aro.org,
kys@...rosoft.com, haiyangz@...rosoft.com, wei.liu@...nel.org,
decui@...rosoft.com, rafael@...nel.org, peterz@...radead.org,
longman@...hat.com, boqun.feng@...il.com, pmladek@...e.com,
senozhatsky@...omium.org, rostedt@...dmis.org,
john.ogness@...utronix.de, juri.lelli@...hat.com,
vincent.guittot@...aro.org, dietmar.eggemann@....com,
bsegall@...gle.com, mgorman@...e.de, bristot@...hat.com,
vschneid@...hat.com, jstultz@...gle.com, sboyd@...nel.org,
linux-kernel@...r.kernel.org, loongarch@...ts.linux.dev,
linux-s390@...r.kernel.org, kvm@...r.kernel.org,
linux-hyperv@...r.kernel.org, linux-pm@...r.kernel.org
Subject: Re: [PATCH v2 08/13] x86/vdso: Fix gettimeofday masking
On Fri, May 19 2023 at 12:21, Peter Zijlstra wrote:
> Because of how the virtual clocks use U64_MAX as an exception value
> instead of a valid time, the clocks can no longer be assumed to wrap
> cleanly. This is then compounded by arch_vdso_cycles_ok() rejecting
> everything with the MSB/Sign-bit set.
>
> Therefore, the effective mask becomes S64_MAX, and the comment with
> vdso_calc_delta() that states the mask is U64_MAX and isn't optimized
> out is just plain silly.
>
> Now, the code has a negative filter -- to deal with TSC wobbles:
>
> if (cycles > last)
>
> which is just plain wrong, because it should've been written as:
>
> if ((s64)(cycles - last) > 0)
>
> to take wrapping into account, but per all the above, we don't
> actually wrap on u64 anymore.
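[For reference, a minimal standalone sketch of why the second form tolerates a u64 wrap while the plain comparison does not; the variable names mirror the quoted snippet, but this is illustration only, not code from the patch:]

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
            /* Pretend the counter wrapped: "last" was read just before
             * the u64 rollover, "cycles" just after it. */
            uint64_t last   = UINT64_MAX - 5;
            uint64_t cycles = 10;

            /* Plain comparison treats the wrapped value as "in the past". */
            printf("cycles > last            : %d\n", cycles > last);

            /* Wrap-safe form: the unsigned subtraction yields the small
             * forward distance (16 here), which is positive as an s64. */
            printf("(s64)(cycles - last) > 0 : %d\n",
                   (int64_t)(cycles - last) > 0);

            return 0;
    }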
Indeed. The rationale was that you need ~146 years of uptime with a 4GHz
TSC, or ~584 years with 1GHz, to actually reach the wrap-around point.
Though I can see your point about making sure that silly BIOSes or VMMs
cannot cause havoc by accident or malice.
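A quick back-of-the-envelope check of those numbers (standalone sketch,
assuming a full u64 wrap and ~31.6e6 seconds per year):

    #include <stdio.h>

    int main(void)
    {
            const double secs_per_year = 365.25 * 24 * 60 * 60;  /* ~31.56e6 */
            const double u64_span = 18446744073709551616.0;      /* 2^64 */

            /* Uptime until a free-running counter wraps the full u64 range. */
            printf("4 GHz: ~%.0f years\n", u64_span / 4e9 / secs_per_year); /* ~146 */
            printf("1 GHz: ~%.0f years\n", u64_span / 1e9 / secs_per_year); /* ~584 */

            return 0;
    }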
Did anyone ever validate that wrap-around on TSC, including the TSC
deadline timer, works correctly?
I have faint memories of TSC_ADJUST, which I prefer not to bring back to
main memory :)
Thanks,
tglx