Message-ID: <a66f8f04-7ccc-d72a-4905-f21347c1f8e1@intel.com>
Date:   Fri, 19 Jul 2019 10:05:01 -0700
From:   Dave Hansen <dave.hansen@...el.com>
To:     Nadav Amit <namit@...are.com>,
        kernel test robot <rong.a.chen@...el.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        Andy Lutomirski <luto@...capital.net>,
        Rick Edgecombe <rick.p.edgecombe@...el.com>,
        Peter Zijlstra <peterz@...radead.org>,
        Andy Lutomirski <luto@...nel.org>,
        Borislav Petkov <bp@...en8.de>,
        "H. Peter Anvin" <hpa@...or.com>, Jessica Yu <jeyu@...nel.org>,
        Kees Cook <keescook@...omium.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Masami Hiramatsu <mhiramat@...nel.org>,
        Rik van Riel <riel@...riel.com>,
        LKML <linux-kernel@...r.kernel.org>, "lkp@...org" <lkp@...org>
Subject: Re: [x86/modules] f2c65fb322: will-it-scale.per_process_ops -2.9%
 regression

On 7/18/19 10:59 AM, Nadav Amit wrote:
> I don’t understand how this patch has any impact on this workload.
> 
> I ran it and set a function tracer on every function that is impacted by this
> patch:
> 
>   # cd /sys/kernel/debug/tracing
>   # echo text_poke_early > set_ftrace_filter
>   # echo module_alloc >> set_ftrace_filter
>   # echo bpf_int_jit_compile >> set_ftrace_filter
>   # tail -f trace
> 
> Nothing came up. Can you please check if you see any of them invoked on your
> setup? Perhaps you have some bpf filters being installed, although even then
> this is a one-time (small) overhead for each process invocation.
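
A quick way to check the BPF hypothesis on the test box, and to count
invocations instead of streaming the trace. This is a sketch, assuming
bpftool is installed, debugfs is mounted at /sys/kernel/debug, and the
kernel was built with CONFIG_FUNCTION_PROFILER:

  # List loaded BPF programs; empty output means there is no BPF JIT
  # activity to attribute the regression to.
  bpftool prog show

  # Count hits on the patched functions with the function profiler
  # (it honors set_ftrace_filter) rather than tailing the live trace.
  cd /sys/kernel/debug/tracing
  echo text_poke_early > set_ftrace_filter
  echo module_alloc >> set_ftrace_filter
  echo bpf_int_jit_compile >> set_ftrace_filter
  echo 1 > function_profile_enabled
  # ... run the will-it-scale workload here ...
  echo 0 > function_profile_enabled
  cat trace_stat/function*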

I think the direct map's structure changed.  I noticed the following in the
0day data:

> 7298e24f904224fa f2c65fb3221adc6b73b0549fc7b 
> ---------------- --------------------------- 
>        fail:runs  %reproduction    fail:runs
>            |             |             |    
>            :4           50%           2:4     dmesg.WARNING:at#for_ip_interrupt_entry/0x
>          %stddev     %change         %stddev
>              \          |                \  
>   10541056 ±  7%     +12.1%   11820032        meminfo.DirectMap2M
>  2.909e+08           -12.0%   2.56e+08 ±  2%  perf-stat.i.iTLB-load-misses
>       1876           +10.4%       2071 ±  2%  perf-stat.i.instructions-per-iTLB-miss
>       1872           +10.4%       2068 ±  2%  perf-stat.overall.instructions-per-iTLB-miss
>  2.899e+08           -12.0%  2.551e+08 ±  2%  perf-stat.ps.iTLB-load-misses

So something with the direct map changed, as did the iTLB miss behavior.
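
Both signals are easy to sample directly. A minimal sketch, assuming perf
exposes the generic iTLB-load-misses event on this CPU; the workload path
is a placeholder for the actual will-it-scale test case:

  # Direct map layout: how much of the kernel direct map is backed by
  # 4k, 2M, and 1G pages. A change that splits or merges large pages
  # shows up in these counters.
  grep DirectMap /proc/meminfo

  # iTLB behavior under the workload; instructions / iTLB-load-misses
  # is the instructions-per-iTLB-miss figure quoted above.
  perf stat -e instructions,iTLB-load-misses -- ./will-it-scale/testcase

  # Run both kernels and compare: a shift in DirectMap2M together with
  # a change in iTLB-load-misses indicates the direct map layout moved.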
