Date:   Fri, 24 Jun 2022 21:16:05 +0800
From:   Zhang Qiao <zhangqiao22@...wei.com>
To:     Vincent Guittot <vincent.guittot@...aro.org>,
        David Chen <david.chen@...anix.com>
CC:     "linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
        Ingo Molnar <mingo@...hat.com>
Subject: Re: Perf regression from scheduler load_balance rework in 5.5?


Hi,
On 2022/6/24 16:22, Vincent Guittot wrote:
> On Thu, 23 Jun 2022 at 21:50, David Chen <david.chen@...anix.com> wrote:
>>
>> Hi,
>>
>> I'm working on upgrading our kernel from 4.14 to 5.10.
>> However, I'm seeing a performance regression when doing random reads of a
>> well-cached file from a Windows client through smbd.
>>
>> One thing I noticed is that on the new kernel, the smbd thread doing socket
>> I/O tends to stay on the same CPU core as the net_rx softirq, whereas on the
>> old kernel it tends to be moved around more randomly. And when they are on
>> the same CPU, that CPU tends to saturate and performance drops.
>>
>> For example, here's the duration (ns) the thread spent on each CPU, captured using bpftrace.
>> On 4.14:
>> @cputime[7]: 20741458382
>> @cputime[0]: 25219285005
>> @cputime[6]: 30892418441
>> @cputime[5]: 31032404613
>> @cputime[3]: 33511324691
>> @cputime[1]: 35564174562
>> @cputime[4]: 39313421965
>> @cputime[2]: 55779811909 (net_rx cpu)
>>
>> On 5.10:
>> @cputime[3]: 2150554823
>> @cputime[5]: 3294276626
>> @cputime[7]: 4277890448
>> @cputime[4]: 5094586003
>> @cputime[1]: 6058168291
>> @cputime[0]: 14688093441
>> @cputime[6]: 17578229533
>> @cputime[2]: 223473400411 (net_rx cpu)
>>
>> I also tried setting the CPU affinity of the smbd thread away from the net_rx
>> CPU, and indeed that seems to bring the performance on par with the old kernel.

I have observed the same problem over the past two weeks.
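
For reference, the workaround David describes can be applied with
sched_setaffinity(2). A minimal, untested sketch; the net_rx CPU number (2)
and the thread ID are examples taken from the traces above:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
        pid_t tid;
        int net_rx_cpu = 2;     /* CPU handling net_rx in the traces above */
        long ncpus = sysconf(_SC_NPROCESSORS_ONLN);
        cpu_set_t set;

        if (argc < 2) {
                fprintf(stderr, "usage: %s <tid>\n", argv[0]);
                return 1;
        }
        tid = atoi(argv[1]);    /* e.g. the smbd I/O thread */

        /* allow every online CPU except the one running net_rx */
        CPU_ZERO(&set);
        for (long cpu = 0; cpu < ncpus; cpu++)
                CPU_SET(cpu, &set);
        CPU_CLR(net_rx_cpu, &set);

        if (sched_setaffinity(tid, sizeof(set), &set) < 0) {
                perror("sched_setaffinity");
                return 1;
        }
        return 0;
}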

>>
>> I noticed that there was a scheduler load_balance rework in 5.5, so I tested
>> on 5.4 and 5.5, and the behavior indeed changed between the two.
> 
> Have you tested v5.18? Several improvements have landed since v5.5.
> 
>>
>> Anyone know how to work around this?
> 
> Have you enabled IRQ_TIME_ACCOUNTING ?


CONFIG_IRQ_TIME_ACCOUNTING=y.

> 
> When the time spent under interrupt becomes significant, the scheduler
> migrates tasks to another CPU.
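
For context, this is roughly how significant IRQ time makes the load balancer
move tasks away: it shrinks the capacity the CPU advertises. A paraphrased
sketch (simplified, not the exact kernel code) of scale_rt_capacity() and
scale_irq_capacity() in kernel/sched/:

/*
 * With CONFIG_IRQ_TIME_ACCOUNTING=y, "irq" below is nonzero, so a CPU
 * busy with softirq looks smaller to the load balancer and CFS tasks
 * tend to be migrated off it.
 */
unsigned long effective_cpu_capacity(unsigned long max,   /* arch capacity, e.g. 1024 */
                                     unsigned long rt_dl, /* util of RT + deadline classes */
                                     unsigned long irq)   /* time accounted to (soft)irq */
{
        unsigned long free;

        if (irq >= max || rt_dl >= max)
                return 1;       /* almost no capacity left for CFS tasks */

        free = max - rt_dl;
        /* scale what is left by the fraction of time not spent in irq */
        return free * (max - irq) / max;
}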


My board has two CPUs. I used iperf3 to test upload bandwidth and saw the same
situation: the iperf3 thread runs on the same CPU as the NET_RX softirq.

After debugging in find_busiest_group(), I noticed that when a CPU (env->idle
is CPU_IDLE or CPU_NEWLY_IDLE) tries to pull a task, busiest->group_type ==
group_fully_busy, busiest->sum_h_nr_running == 1, and local->group_type ==
group_has_spare, so load balancing fails in find_busiest_group(), as follows:

find_busiest_group():
    ...
    if (busiest->group_type != group_overloaded) {
        ...
        if (busiest->sum_h_nr_running == 1)
            /* busiest doesn't have any tasks waiting to move */
            goto out_balanced;    <---- load balancing returns here
    ...
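
For reference, the group_type values involved (from kernel/sched/fair.c in
v5.5 and later, ordered from least to most loaded):

enum group_type {
        group_has_spare = 0,    /* the local group here: idle capacity available */
        group_fully_busy,       /* the busiest group here: busy but not overloaded */
        group_misfit_task,
        group_asym_packing,
        group_imbalanced,
        group_overloaded,
};

Because the busiest group is only fully busy (not overloaded) and has a single
running task, the idle CPU gives up instead of pulling that task, so the thread
stays on the CPU that is also handling the net_rx softirq.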


Thanks,
Qiao


> Vincent
>>
>> Thanks,
>> David
> .
> 
