Date:   Mon, 5 Nov 2018 21:09:24 +0100
From:   Vlastimil Babka <vbabka@...e.cz>
To:     Linus Torvalds <torvalds@...ux-foundation.org>,
        rong.a.chen@...el.com
Cc:     yang.shi@...ux.alibaba.com, kirill.shutemov@...ux.intel.com,
        mhocko@...nel.org, willy@...radead.org, ldufour@...ux.vnet.ibm.com,
        Colin King <colin.king@...onical.com>,
        Andrew Morton <akpm@...ux-foundation.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        lkp@...org
Subject: Re: [LKP] [mm] 9bc8039e71: will-it-scale.per_thread_ops -64.1%
 regression

On 11/5/18 6:50 PM, Linus Torvalds wrote:
> On Sun, Nov 4, 2018 at 9:08 PM kernel test robot <rong.a.chen@...el.com> wrote:
>>
>> FYI, we noticed a -64.1% regression of will-it-scale.per_thread_ops
>> due to commit 9bc8039e715d ("mm: brk: downgrade mmap_sem to read when
>> shrinking")
> 
> Ugh. That looks pretty bad.
> 
>> in testcase: will-it-scale
>> on test machine: 8 threads Ivy Bridge with 16G memory
>> with following parameters:
>>
>>         nr_task: 100%
>>         mode: thread
>>         test: brk1
>>         ucode: 0x20
>>         cpufreq_governor: performance
> 
> The reason seems to be way more scheduler time due to lots more
> context switches:
> 
>>   34925294 ± 18%    +270.3%  1.293e+08 ±  4%  will-it-scale.time.voluntary_context_switches

And what about this:

      0.83 ± 27%     +25.9       26.75 ± 11%  perf-profile.calltrace.cycles-pp.intel_idle.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary
      1.09 ± 32%     +30.9       31.97 ± 10%  perf-profile.calltrace.cycles-pp.cpuidle_enter_state.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
      1.62 ± 36%     +44.4       46.01 ±  9%  perf-profile.calltrace.cycles-pp.do_idle.cpu_startup_entry.start_secondary.secondary_startup_64
      1.63 ± 36%     +44.5       46.18 ±  9%  perf-profile.calltrace.cycles-pp.cpu_startup_entry.start_secondary.secondary_startup_64
      1.63 ± 36%     +44.6       46.21 ±  9%  perf-profile.calltrace.cycles-pp.start_secondary.secondary_startup_64
      1.73 ± 29%     +51.1       52.86 ±  2%  perf-profile.calltrace.cycles-pp.secondary_startup_64

And the graphs show less user/kernel time and a lower
"percent_of_cpu_this_job_got"...

I didn't spot an obvious mistake in the patch itself, so it looks
like a bad interaction between the scheduler and the mmap_sem downgrade?
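
For reference, the pattern the commit introduces in the brk path is
roughly the following (a paraphrased sketch from memory, not verbatim
from the patch): the shrink side now detaches the vmas under the write
lock and then downgrades mmap_sem to read for the actual unmap work,
so page faults in other threads can proceed concurrently:

  /* Sketch of the brk shrink path after 9bc8039e715d (paraphrased). */
  if (down_write_killable(&mm->mmap_sem))
          return -EINTR;

  /* ... validate the new brk; when shrinking, unmap the tail: ... */

  /*
   * With downgrade == true, __do_munmap() detaches the vmas under
   * the write lock, calls downgrade_write(&mm->mmap_sem), and
   * returns 1 so the caller knows to drop the read lock instead.
   */
  ret = __do_munmap(mm, newbrk, oldbrk - newbrk, &uf, true);
  if (ret == 1)
          up_read(&mm->mmap_sem);       /* was downgraded to read */
  else
          up_write(&mm->mmap_sem);

Every brk1 iteration would then take the write lock, downgrade it, and
wake any readers queued on the rwsem, which might help explain the jump
in voluntary context switches.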
 
> Yang Shi, would you mind taking a look at what's going on?
> 
>               Linus
> 
