Message-ID: <87h76ew3sb.ffs@tglx>
Date:   Thu, 28 Apr 2022 00:38:12 +0200
From:   Thomas Gleixner <tglx@...utronix.de>
To:     Waiman Long <longman@...hat.com>, Ingo Molnar <mingo@...hat.com>,
        Borislav Petkov <bp@...en8.de>,
        Dave Hansen <dave.hansen@...ux.intel.com>
Cc:     x86@...nel.org, linux-kernel@...r.kernel.org,
        "H. Peter Anvin" <hpa@...or.com>, Feng Tang <feng.tang@...el.com>,
        Bill Gray <bgray@...hat.com>, Jirka Hladky <jhladky@...hat.com>
Subject: Re: [PATCH 2/2] x86/tsc_sync: Add synchronization overhead to tsc
 adjustment

On Tue, Apr 26 2022 at 11:36, Waiman Long wrote:
> On 4/25/22 15:24, Thomas Gleixner wrote:
>> Yes. It's clear that the initial sync overhead is due to the cache line
>> being remote, but I'd rather underestimate the compensation. Aside from
>> that, it's not guaranteed that the cache line is actually remote on the
>> first access. It's by chance, not by design.
>
> In check_tsc_warp(), the unlikely(prev > now) check can only trigger
> and record a possible warp if last_tsc was previously written by
> another CPU. That requires the transfer of the lock cacheline from the
> remote CPU to the local CPU as well. So the sync overhead with a remote
> cacheline is what really matters here. I had actually thought about
> just measuring the local cacheline sync overhead so as to underestimate
> it, and I am fine with doing it.

Fair enough, but what I meant is that when estimating the actual sync
overhead, there is no guarantee that the cache line is remote.

The CPU which does that estimation might have been the last one to take
the lock; there is no guarantee that the reference CPU locked last or
wrote to the cache line last.
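
For illustration, the contended section of check_tsc_warp() looks
roughly like this (simplified sketch, loop control and warp accounting
elided):

	arch_spin_lock(&sync_lock);
	prev = last_tsc;	/* remote or local, depending on timing */
	now = rdtsc_ordered();
	last_tsc = now;
	arch_spin_unlock(&sync_lock);

	/* Can only fire when the other CPU wrote last_tsc last */
	if (unlikely(prev > now)) {
		arch_spin_lock(&sync_lock);
		max_warp = max(max_warp, prev - now);
		arch_spin_unlock(&sync_lock);
	}

Which CPU owns the sync_lock/last_tsc cacheline on any given iteration
is purely a matter of timing.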

>> IOW, TSC runs with a constant frequency independent of the actual CPU
>> frequency, ergo the CPU frequency dependent execution time has an
>> influence on the resulting compensation value, no?
>>
>> On the machine I tested on, it's a factor of 3 between the minimal and
>> the maximal CPU frequency, which makes quite a difference, right?
>
> Yes, I understand that. The measurement of sync_overhead is for
> estimating the delay (in TSC cycles) that the locking overhead
> introduces. At a CPU frequency of 1000MHz, the delay in TSC cycles will
> be double that of a CPU running at 2000MHz, so you need more
> compensation in that case. That is why I said that as long as the clock
> frequency doesn't change across the check_tsc_warp() loop and the
> sync_overhead measurement part of the code, the actual CPU frequency
> does not matter here.

I grant you that it does not matter for the loop under the assumption
that the loop runs at constant frequency, but is that a guarantee that
it does not matter later on?

If you overcompensate by a factor of 3 because the upcoming CPU ran at
the lowest frequency, then it might become visible later when everything
runs at full speed.
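
To put illustrative (made up) numbers on it: assume the lock/unlock
sequence costs ~200 core cycles and the TSC ticks at a constant
2000MHz:

	core @ 2000MHz: 200 cycles = 100ns -> ~200 TSC cycles measured
	core @  667MHz: 200 cycles = 300ns -> ~600 TSC cycles measured

An overhead measured at 667MHz but applied while the CPU runs at
2000MHz overshoots by ~400 TSC cycles.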

> How about we halve the measured sync_overhead as compensation to avoid
> over-estimation, though that probably increases the chance that we need
> a second adjustment of the TSC warp.

Half of what?

> With this patch applied, the measured overhead on the same CooperLake
> system on different reboot runs varies from 104 to 326.

Half of something which jumps around? Not convinced. :)
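
(Half of a value that varies from 104 to 326 across reboots is anywhere
between 52 and 163 cycles; the run-to-run variation alone is larger
than any halved compensation value.)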

Btw:
>> Yes, I will try that experiment and report back the results.

Could you please do that? I'd really like to see the data points.

It's so sad that we still have to deal with this nonsense just because
HW people don't give a damn.

Thanks,

        tglx
