Message-ID: <4f02fe46-b253-2809-0af7-f2e9da091fe9@redhat.com>
Date: Mon, 25 Apr 2022 09:20:59 -0400
From: Waiman Long <longman@...hat.com>
To: Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>, Borislav Petkov <bp@...en8.de>,
Dave Hansen <dave.hansen@...ux.intel.com>
Cc: x86@...nel.org, linux-kernel@...r.kernel.org,
"H. Peter Anvin" <hpa@...or.com>, Feng Tang <feng.tang@...el.com>,
Bill Gray <bgray@...hat.com>, Jirka Hladky <jhladky@...hat.com>
Subject: Re: [PATCH 2/2] x86/tsc_sync: Add synchronization overhead to tsc
adjustment
On 4/22/22 06:41, Thomas Gleixner wrote:
> On Mon, Apr 18 2022 at 11:41, Waiman Long wrote:
>> On 4/3/22 06:03, Thomas Gleixner wrote:
>> [ 0.008815] TSC ADJUST compensate: CPU36 observed 86056 warp
>> (overhead 150). Adjust: 86206
>> [ 0.008815] TSC ADJUST compensate: CPU54 observed 86148 warp
>> (overhead 158). Adjust: 86306
>>
>>> Also if the compensation value is at the upper end and the real overhead
>>> is way lower then the validation run might end up with the opposite
>>> result. I'm a bit worried about this variation.
>> I also have a little concern about that. That is why I added patch 1
>> to minimize external interference as much as possible. For the TSC
>> adjustment samples that I have collected so far, I have never seen one
>> that needs a 2nd adjustment to go backward.
> I did some experiments and noticed that the boot time overhead is
> different from the overhead when doing the sync check after boot
> (offline a socket and on/offline the first CPU of it several times).
>
> During boot the overhead is lower on this machine (SKL-X), during
> runtime it's way higher and more noisy.
>
> The noise can be pretty much eliminated by running the sync_overhead
> measurement multiple times and building the average.
>
> The reason why it is higher is that after offlining the socket the CPU
> comes back up with a frequency of 700Mhz while during boot it runs with
> 2100Mhz.
>
> Sync overhead: 118
> Sync overhead: 51 A: 22466 M: 22448 F: 2101683
One explanation of the sync overhead difference (118 vs 51) here is
whether the lock cacheline is local or remote. My analysis of the
interaction between check_tsc_sync_source() and check_tsc_sync_target()
is that the real overhead comes from locking a remote cacheline (local
to the source, remote to the target). When you do the 256-iteration
locking loop, the cacheline stays local, which is why the overhead is
lower. It also depends on whether the remote cacheline is in the same
socket or a different socket.
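
For illustration only, here is a rough userspace sketch (not kernel code;
the CPU numbers, the dirtying thread and all names are made up for this
example) of how a lock cacheline that keeps getting pulled to another CPU
inflates the measured lock/unlock cost compared with the purely local
case:

#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdatomic.h>
#include <stdio.h>
#include <x86intrin.h>

/* Stand-in for sync.lock plus a neighbouring field on the same line. */
static struct {
	atomic_int lock;	/* 0 = unlocked, 1 = locked */
	atomic_int dirt;	/* written by the other CPU to steal the line */
} __attribute__((aligned(64))) shared;

static atomic_int stop;

static void pin_to_cpu(int cpu)
{
	cpu_set_t set;

	CPU_ZERO(&set);
	CPU_SET(cpu, &set);
	pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
}

/* The "source" side: keeps the cacheline bouncing by writing to it. */
static void *dirty_thread(void *arg)
{
	(void)arg;
	pin_to_cpu(1);
	while (!atomic_load_explicit(&stop, memory_order_relaxed))
		atomic_fetch_add_explicit(&shared.dirt, 1, memory_order_relaxed);
	return NULL;
}

/* Average TSC cycles for one lock+unlock, like the 256-iteration loop.
 * __rdtsc() is not serializing like rdtsc_ordered(); good enough here. */
static unsigned long long avg_lock_unlock(int iters)
{
	unsigned long long total = 0;

	for (int i = 0; i < iters; i++) {
		unsigned long long start = __rdtsc();

		while (atomic_exchange_explicit(&shared.lock, 1, memory_order_acquire))
			;
		atomic_store_explicit(&shared.lock, 0, memory_order_release);
		total += __rdtsc() - start;
	}
	return total / iters;
}

int main(void)
{
	pthread_t t;

	pin_to_cpu(0);

	/* Local case: nobody else touches the cacheline. */
	printf("local  : %llu cycles\n", avg_lock_unlock(256));

	/* Remote case: CPU1 keeps yanking the cacheline away. */
	pthread_create(&t, NULL, dirty_thread, NULL);
	printf("bounced: %llu cycles\n", avg_lock_unlock(256));
	atomic_store(&stop, 1);
	pthread_join(&t, NULL);
	return 0;
}

Pinning the dirtying thread to a CPU on the other socket would show the
cross-socket case as well.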
>
> Sync overhead: 178
> Sync overhead: 152 A: 22477 M: 67380 F: 700529
>
> Sync overhead: 212
> Sync overhead: 152 A: 22475 M: 67380 F: 700467
>
> Sync overhead: 153
> Sync overhead: 152 A: 22497 M: 67452 F: 700404
>
> Can you try the patch below and check whether the overhead stabilizes
> accross several attempts on that copperlake machine and whether the
> frequency is always the same or varies?
Yes, I will try that experiment and report back the results.
>
> Independent of the outcome on that, I think have to take the actual CPU
> frequency into account for calculating the overhead.
Assuming that the clock frequency remains the same during the
check_tsc_warp() loop and the sync overhead computation, I don't think
the actual clock frequency matters much. However, it is a different
matter if the frequency does change. In that case, it is more likely
that the frequency will go up than down. Right? IOW, we may
underestimate the sync overhead in that case, which I think is better
than overestimating it.
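
As a side note on the F value in the log above: the patch computes it as
cpu_khz * dAPERF / dMPERF, so with cpu_khz around 2100000 on that box,
2100000 * 22477 / 67380 gives the logged 700529 kHz. If we did want the
overhead to be independent of the current frequency, something like the
following (just my sketch, a hypothetical helper that is not part of the
patch) could express it in TSC cycles at the base frequency:

/*
 * Hypothetical helper (name made up): express a lock/unlock overhead
 * measured in TSC cycles as the number of TSC cycles the same code
 * would take at the base (cpu_khz) frequency.  The TSC ticks at the
 * base rate regardless of the core frequency, so a sequence needing N
 * core cycles costs N * mperf/aperf TSC cycles at the current
 * frequency; multiplying by aperf/mperf undoes that.
 */
static u64 sync_overhead_at_base_freq(u64 overhead, u64 aperf_delta,
				      u64 mperf_delta)
{
	if (!aperf_delta || !mperf_delta)
		return overhead;

	return div64_u64(overhead * aperf_delta, mperf_delta);
}
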
Cheers,
Longman
>
> Thanks,
>
> tglx
> ---
> --- a/arch/x86/kernel/tsc_sync.c
> +++ b/arch/x86/kernel/tsc_sync.c
> @@ -446,7 +446,8 @@ void check_tsc_sync_target(void)
> unsigned int cpu = smp_processor_id();
> cycles_t cur_max_warp, gbl_max_warp;
> cycles_t start, sync_overhead;
> - int cpus = 2;
> + u64 ap1, ap2, mp1, mp2;
> + int i, cpus = 2;
>
> /* Also aborts if there is no TSC. */
> if (unsynchronized_tsc())
> @@ -514,6 +515,23 @@ void check_tsc_sync_target(void)
> arch_spin_lock(&sync.lock);
> arch_spin_unlock(&sync.lock);
> sync_overhead = rdtsc_ordered() - start;
> + pr_info("Sync overhead: %lld\n", sync_overhead);
> +
> + sync_overhead = 0;
> + rdmsrl(MSR_IA32_APERF, ap1);
> + rdmsrl(MSR_IA32_MPERF, mp1);
> + for (i = 0; i < 256; i++) {
> + start = rdtsc_ordered();
> + arch_spin_lock(&sync.lock);
> + arch_spin_unlock(&sync.lock);
> + sync_overhead += rdtsc_ordered() - start;
> +	}
> + rdmsrl(MSR_IA32_APERF, ap2);
> + rdmsrl(MSR_IA32_MPERF, mp2);
> + ap2 -= ap1;
> + mp2 -= mp1;
> + pr_info("Sync overhead: %lld A: %llu M: %llu F: %llu\n", sync_overhead >> 8,
> + ap2, mp2, mp2 ? div64_u64((cpu_khz * ap2), mp2) : 0);
>
> /*
> * If the warp value of this CPU is 0, then the other CPU
>