Message-ID: <570E67B1.3000708@hpe.com>
Date:	Wed, 13 Apr 2016 11:37:21 -0400
From:	Waiman Long <waiman.long@....com>
To:	Ingo Molnar <mingo@...nel.org>
CC:	Thomas Gleixner <tglx@...utronix.de>,
	Ingo Molnar <mingo@...hat.com>,
	"H. Peter Anvin" <hpa@...or.com>, <linux-kernel@...r.kernel.org>,
	<x86@...nel.org>, Jiang Liu <jiang.liu@...ux.intel.com>,
	Borislav Petkov <bp@...e.de>,
	Andy Lutomirski <luto@...nel.org>,
	Scott J Norton <scott.norton@....com>,
	Douglas Hatch <doug.hatch@....com>,
	Randy Wright <rwright@....com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: [PATCH v4] x86/hpet: Reduce HPET counter read contention

On 04/13/2016 02:18 AM, Ingo Molnar wrote:
> * Waiman Long <Waiman.Long@....com> wrote:
>
>> On a large system with many CPUs, using HPET as the clock source can
>> have a significant impact on the overall system performance because
>> of the following reasons:
>>   1) There is a single HPET counter shared by all the CPUs.
>>   2) HPET counter reading is a very slow operation.
>>
>> Using HPET as the default clock source may happen when, for example,
>> the TSC clock calibration exceeds the allowable tolerance. Sometimes
>> the performance slowdown can be so severe that the system may crash
>> because of an NMI watchdog soft lockup, for example.
>>   /*
>> + * Reading the HPET counter is a very slow operation. If a large number of
>> + * CPUs are trying to access the HPET counter simultaneously, it can cause
>> + * massive delay and slow down system performance dramatically. This may
>> + * happen when HPET is the default clock source instead of TSC. For a
>> + * really large system with hundreds of CPUs, the slowdown may be so
>> + * severe that it may actually crash the system because of an NMI watchdog
>> + * soft lockup, for example.
>> + *
>> + * If multiple CPUs are trying to access the HPET counter at the same time,
>> + * we don't actually need to read the counter multiple times. Instead, the
>> + * other CPUs can use the counter value read by the first CPU in the group.
> Hm, weird, so how can this:
>
>    static cycle_t read_hpet(struct clocksource *cs)
>    {
>           return (cycle_t)hpet_readl(HPET_COUNTER);
>    }
>
> ... cause an actual slowdown of that magnitude? This goes straight to MMIO. So is
> the hardware so terminally broken?

I only know that accessing the HPET counter is VERY slow. Andy said that 
it takes at least a few us. I haven't done that measurement myself.

I am not sure exactly what kind of contention happens when multiple CPUs 
access it at the same time. It is not just the clock tick interrupt 
handler that needs to read the time; many system calls also cause the 
current time to be read. When we have hundreds of CPUs in the system, it 
is not too hard to cause a soft lockup if HPET is the default clock 
source.
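
For example, a trivial userspace loop like the one below, run with many
threads on such a box, keeps hitting the clocksource: with tsc it stays in
the vDSO and is cheap, but with hpet every call ends up reading the one
shared counter. (Just a sketch to show the access pattern, not a benchmark
I have run here.)

    /* Hammer clock_gettime() from several threads. Build with -pthread. */
    #include <pthread.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    #define CALLS_PER_THREAD 1000000

    static void *hammer(void *arg)
    {
        struct timespec ts;

        for (int i = 0; i < CALLS_PER_THREAD; i++)
            clock_gettime(CLOCK_MONOTONIC, &ts);
        return NULL;
    }

    int main(int argc, char **argv)
    {
        int nthreads = (argc > 1) ? atoi(argv[1]) : 4;

        if (nthreads < 1)
            nthreads = 1;

        pthread_t tid[nthreads];

        for (int i = 0; i < nthreads; i++)
            pthread_create(&tid[i], NULL, hammer, NULL);
        for (int i = 0; i < nthreads; i++)
            pthread_join(tid[i], NULL);

        printf("%d threads x %d clock_gettime() calls done\n",
               nthreads, CALLS_PER_THREAD);
        return 0;
    }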

> How good is the TSC clocksource on the affected system? Could we simply always use
> the TSC (and not use the HPET at all as a clocksource), instead of trying to fix
> broken hardware?
>
> Thanks,
>
> 	Ingo

The TSC clocksource, on the other hand, is per-CPU, so there won't be 
much contention in accessing it. Normally the TSC will be used as the 
default clock source. However, if there is too much variation in the 
actual clock speeds of the individual CPUs, TSC calibration will fail 
and the kernel will revert to using HPET as the clock source. During 
bootup, HPET will usually be selected as the default clock source first; 
after a short time, the TSC takes over as the default. The problem can 
happen during that short transition period too. In fact, we have 
16-socket Broadwell-EX systems that hit this soft lockup once every few 
reboot cycles, which is what prompted me to find a fix.
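
As an aside, the clocksource the kernel actually ended up with can be read
from /sys/devices/system/clocksource/clocksource0/current_clocksource, e.g.
with something as simple as this little reader (just a sketch):

    #include <stdio.h>

    int main(void)
    {
        char buf[64];
        FILE *f = fopen("/sys/devices/system/clocksource/"
                        "clocksource0/current_clocksource", "r");

        if (!f) {
            perror("fopen");
            return 1;
        }
        if (fgets(buf, sizeof(buf), f))
            printf("current clocksource: %s", buf);  /* e.g. "tsc" or "hpet" */
        fclose(f);
        return 0;
    }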

Cheers,
Longman


