Message-ID: <4ce8e384-598c-cf93-96a4-5a4fca82aff8@huawei.com>
Date: Fri, 18 Oct 2019 20:22:50 +0800
From: Yunfeng Ye <yeyunfeng@...wei.com>
To: Mark Rutland <mark.rutland@....com>
CC: <catalin.marinas@....com>, <will@...nel.org>,
<kstewart@...uxfoundation.org>, <sudeep.holla@....com>,
<gregkh@...uxfoundation.org>, <lorenzo.pieralisi@....com>,
<tglx@...utronix.de>, <David.Laight@...LAB.COM>,
<ard.biesheuvel@...aro.org>,
"hushiyuan@...wei.com" <hushiyuan@...wei.com>,
"linfeilong@...wei.com" <linfeilong@...wei.com>,
<wuyun.wu@...wei.com>, <linux-kernel@...r.kernel.org>,
<linux-arm-kernel@...ts.infradead.org>
Subject: Re: [PATCH V3] arm64: psci: Reduce waiting time for
cpu_psci_cpu_kill()
On 2019/10/18 19:41, Mark Rutland wrote:
> On Fri, Oct 18, 2019 at 07:24:14PM +0800, Yunfeng Ye wrote:
>> In a case like suspend-to-disk, a large number of CPU cores need to be
>> shut down. At present, the CPU hotplug operation is serialised, and the
>> CPU cores can only be shut down one by one. In this process, if PSCI
>> affinity_info() does not return LEVEL_OFF quickly, cpu_psci_cpu_kill()
>> needs to wait for 10ms. If hundreds of CPU cores need to be shut down,
>> it will take a long time.
>
> Do we have an idea of roughly how long a CPU _usually_ takes to
> transition state?
>
> i.e. are we _just_ missing the transition the first time we call
> AFFINITY_INFO?
>
We have tested this: in most cases it is less than 1 ms (typically 50us-500us).
The time includes not only the hardware state transition but also flushing
caches in the BIOS, and the cache flush is the time-consuming part.
>> Normally, there is no need to wait 10 ms in cpu_psci_cpu_kill(). So
>> change the wait interval from 10 ms to a max of 1 ms and use
>> usleep_range() instead of msleep() for more accurate scheduling.
>>
>> In addition, reducing the wait interval will increase the message
>> output, so remove the "Retry ..." message and instead put the number
>> of waits into the success message.
>>
>> Signed-off-by: Yunfeng Ye <yeyunfeng@...wei.com>
>> ---
>> v2 -> v3:
>> - update the comment
>> - remove the busy-wait logic, modify the loop logic and output message
>>
>> v1 -> v2:
>> - use usleep_range() instead of udelay() after waiting for a while
>>
>> arch/arm64/kernel/psci.c | 7 +++----
>> 1 file changed, 3 insertions(+), 4 deletions(-)
>>
>> diff --git a/arch/arm64/kernel/psci.c b/arch/arm64/kernel/psci.c
>> index c9f72b2665f1..00b8c0825a08 100644
>> --- a/arch/arm64/kernel/psci.c
>> +++ b/arch/arm64/kernel/psci.c
>> @@ -91,15 +91,14 @@ static int cpu_psci_cpu_kill(unsigned int cpu)
>> * while it is dying. So, try again a few times.
>> */
>>
>> - for (i = 0; i < 10; i++) {
>> + for (i = 0; i < 100; i++) {
>> err = psci_ops.affinity_info(cpu_logical_map(cpu), 0);
>> if (err == PSCI_0_2_AFFINITY_LEVEL_OFF) {
>> - pr_info("CPU%d killed.\n", cpu);
>> + pr_info("CPU%d killed by waiting %d loops.\n", cpu, i);
>
> Could we please make that:
>
> pr_info("CPU%d killed (polled %d times)\n", cpu, i + 1);
>
ok, thanks.
>
>
>> return 0;
>> }
>>
>> - msleep(10);
>> - pr_info("Retrying again to check for CPU kill\n");
>> + usleep_range(100, 1000);
>
> Hmm, so now we'll wait somewhere between 10ms and 100ms before giving up
> on a CPU depending on how long we actually sleep for each iteration of
> the loop. That should be called out in the commit message.
>
> That could matter for kdump when you have a large number of CPUs, as in
> the worst case for 256 CPUs we've gone from ~2.6s to ~26s. But tbh in
> that case I'm not sure I care that much...
>
> In the majority of cases I'd hope AFFINITY_INFO would return OFF after
> an iteration or two.
>
Normally it will not need that much time.
> Thanks,
> Mark.
>
>