Message-ID: <CAJZ5v0hdJhuoX2j=X4C5Rq+GT9_qKtMwgc-P6OJMfQ_36uLaKg@mail.gmail.com>
Date: Thu, 9 Nov 2017 23:30:54 +0100
From: "Rafael J. Wysocki" <rafael@...nel.org>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: WANG Chao <chao.wang@...oud.cn>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Thomas Gleixner <tglx@...utronix.de>,
Ingo Molnar <mingo@...hat.com>,
"H. Peter Anvin" <hpa@...or.com>,
Vikas Shivappa <vikas.shivappa@...ux.intel.com>,
Kate Stewart <kstewart@...uxfoundation.org>,
Len Brown <len.brown@...el.com>,
Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
Philippe Ombredanne <pombredanne@...b.com>,
Mathias Krause <minipli@...glemail.com>,
"the arch/x86 maintainers" <x86@...nel.org>,
"Rafael J. Wysocki" <rafael@...nel.org>,
Linux PM <linux-pm@...r.kernel.org>,
"Rafael J. Wysocki" <rafael.j.wysocki@...el.com>
Subject: Re: [PATCH] x86: use cpufreq_quick_get() for /proc/cpuinfo "cpu MHz" again
On Thu, Nov 9, 2017 at 5:06 PM, Rafael J. Wysocki
<rafael.j.wysocki@...el.com> wrote:
> Hi Linus,
>
> On 11/9/2017 11:38 AM, WANG Chao wrote:
>>
>> Commit 941f5f0f6ef5 (x86: CPU: Fix up "cpu MHz" in /proc/cpuinfo) caused
>> a serious performance issue when reading from /proc/cpuinfo on systems
>> with aperfmperf.
>>
>> For each CPU, arch_freq_get_on_cpu() sleeps 20ms to get its frequency.
>> On a system with 64 CPUs, it takes 1.5s to finish running `cat
>> /proc/cpuinfo`, whereas it previously completed in 15ms.
>
> Honestly, I'm not sure what to do to address this ATM.
>
> The last requested frequency is only available in the non-HWP case, so it
> cannot be used universally.
OK, here's an idea.
c_start() can run aperfmperf_snapshot_khz() on all CPUs upfront (say
in parallel) and then wait for a while (say 5 ms; the current 20 ms
wait is overkill).  After that, aperfmperf_snapshot_khz() can be run
once on each CPU in show_cpuinfo() without taking the "stale cache"
threshold into account.
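
To make that concrete, here is a rough sketch (untested, not a patch)
of the upfront part.  aperfmperf_snapshot_all_cpus() is just a name
made up for illustration, aperfmperf_snapshot_khz() is the existing
per-CPU snapshot callback in arch/x86/kernel/cpu/aperfmperf.c (stubbed
below only so the sketch stands alone), and CPU hotplug locking is
omitted for brevity:

#include <linux/cpumask.h>
#include <linux/delay.h>
#include <linux/smp.h>

/*
 * The real aperfmperf_snapshot_khz() reads APERF/MPERF on the current
 * CPU and caches the computed frequency; stubbed here for the sketch.
 */
static void aperfmperf_snapshot_khz(void *dummy) { }

/* Hypothetical helper, called from c_start() before any output. */
static void aperfmperf_snapshot_all_cpus(void)
{
	int cpu;

	/* Fire the snapshot IPIs without waiting, so they run in parallel. */
	for_each_online_cpu(cpu)
		smp_call_function_single(cpu, aperfmperf_snapshot_khz,
					 NULL, 0);

	/* Give APERF/MPERF a short time to accumulate (5 ms, not 20 ms). */
	msleep(5);
}

show_cpuinfo() would then run aperfmperf_snapshot_khz() once per CPU
(or just report the cached per-CPU value) and skip the stale-cache
check, because everything was refreshed right before the output
started.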
I'm going to try that and see how far I can get with it.
Thanks,
Rafael