Message-ID: <BANLkTi=XCaHH2D9-=wrBAvf3-WjngBPx8w@mail.gmail.com>
Date:	Thu, 12 May 2011 11:44:48 -0700
From:	Nikhil Rao <ncrao@...gle.com>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Frédéric Weisbecker <fweisbec@...il.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Mike Galbraith <efault@....de>, linux-kernel@...r.kernel.org,
	"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ux.vnet.ibm.com>,
	Stephan Barwolf <stephan.baerwolf@...ilmenau.de>
Subject: Re: [PATCH v1 00/19] Increase resolution of load weights

On Thu, May 12, 2011 at 3:55 AM, Ingo Molnar <mingo@...e.hu> wrote:
>
> * Nikhil Rao <ncrao@...gle.com> wrote:
>> On Tue, May 10, 2011 at 11:59 PM, Ingo Molnar <mingo@...e.hu> wrote:
>> >
>> From this latest run on -tip, the instruction count is about 0.28%
>> more and cycles are approx 3.38% more. From the stalled cycles counts,
>> it looks like most of this increase is coming from backend stalled
>> cycles. It's not clear what type of stalls these are, but if I were to
>> guess, I think it means stalls post-decode (i.e. functional units,
>> load/store, etc.). Is that right?
>
> Yeah, more functional work to be done, and probably a tad more expensive per
> extra instruction executed.
>

OK, this might be the shifts we do in c_d_m(). To confirm this, let me
remove the shifts and see if the number of stalled cycles decreases.
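
To spell out where those shifts come from: with the higher-resolution
weights, values are stored scaled up by SCHED_LOAD_RESOLUTION and have
to be scaled back down in the delta calculation. A simplified userspace
sketch follows; the SCHED_LOAD_RESOLUTION / scale_load() /
scale_load_down() names follow the patch series, but calc_delta() here
is only an illustration, not the real c_d_m():

/* Simplified illustration of the extra shifts; not the kernel code. */
#include <stdio.h>

#define SCHED_LOAD_RESOLUTION  10      /* extra bits of weight resolution */
#define scale_load(w)       ((unsigned long)(w) << SCHED_LOAD_RESOLUTION)
#define scale_load_down(w)  ((unsigned long)(w) >> SCHED_LOAD_RESOLUTION)

/* Scale delta_exec by weight/total_weight, roughly the job c_d_m() does. */
static unsigned long long
calc_delta(unsigned long long delta_exec, unsigned long weight,
           unsigned long total_weight)
{
        /* The scale_load_down() shifts below are the extra work on this
         * hot path: they bring the scaled-up weights back into the old
         * range before the multiply/divide. */
        return delta_exec * scale_load_down(weight) /
               scale_load_down(total_weight);
}

int main(void)
{
        unsigned long nice0 = scale_load(1024);  /* nice-0 weight, scaled up */

        /* 1ms of delta_exec split between two nice-0 entities -> 500000 */
        printf("%llu\n", calc_delta(1000000ULL, nice0, 2 * nice0));
        return 0;
}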

> How did branches and branch misses change?
>

It looks like we take slightly more branches and miss more often: about
0.2% more branches, with a miss rate about 25% higher (2.957% vs. 2.376%).
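
A quick arithmetic check of those ratios, using the raw counts from the
two perf stat runs below (just the percentages recomputed, nothing
kernel-specific):

/* Recompute the branch and branch-miss deltas from the counts below. */
#include <stdio.h>

int main(void)
{
        double tip_branches   = 165921546, tip_misses   = 3941788;
        double patch_branches = 166266732, patch_misses = 4917179;

        double tip_rate   = 100.0 * tip_misses   / tip_branches;
        double patch_rate = 100.0 * patch_misses / patch_branches;

        printf("branches: +%.2f%%\n",
               100.0 * (patch_branches / tip_branches - 1.0));
        printf("miss rate: %.3f%% -> %.3f%% (+%.1f%% relative)\n",
               tip_rate, patch_rate, 100.0 * (patch_rate / tip_rate - 1.0));
        return 0;
}

This prints about +0.21% for branches and a miss rate going from 2.376%
to 2.957%, i.e. about 24.5% more misses per branch, consistent with the
~25% above.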

-tip:
# taskset 8 perf stat --repeat 100 -e instructions -e cycles -e branches -e branch-misses /root/data/pipe-test-100k

 Performance counter stats for '/root/data/pipe-test-100k' (100 runs):

       906,385,082 instructions             #      0.835 IPC     ( +-   0.077% )
     1,085,517,988 cycles                     ( +-   0.139% )
       165,921,546 branches                   ( +-   0.071% )
         3,941,788 branch-misses            #      2.376 %       ( +-   0.952% )

        1.061813201  seconds time elapsed   ( +-   0.096% )


-tip+patches:
# taskset 8 perf stat --repeat 100 -e instructions -e cycles -e branches -e branch-misses /root/data/pipe-test-100k

 Performance counter stats for '/root/data/pipe-test-100k' (100 runs):

       908,150,127 instructions             #      0.829 IPC     ( +-   0.073% )
     1,095,344,326 cycles                     ( +-   0.140% )
       166,266,732 branches                   ( +-   0.071% )
         4,917,179 branch-misses            #      2.957 %       ( +-   0.746% )

        1.065221478  seconds time elapsed   ( +-   0.099% )


Comparing two perf records of branch-misses by hand, we see about the
same number of branch-miss events but the distribution looks less
top-heavy compared to -tip, so we might have a longer tail of branch
misses with the patches. None of the scheduler functions really stand
out.

-tip:
# taskset 8 perf record -e branch-misses /root/pipe-test-30m

# perf report | head -n 20
# Events: 310K cycles
#
# Overhead        Command      Shared Object                              Symbol
# ........  .............  .................  .....................................
#
    11.15%  pipe-test-30m  [kernel.kallsyms]  [k] system_call
     7.70%  pipe-test-30m  [kernel.kallsyms]  [k] x86_pmu_disable_all
     6.63%  pipe-test-30m  libc-2.11.1.so     [.] __GI_read
     6.11%  pipe-test-30m  [kernel.kallsyms]  [k] pipe_read
     5.74%  pipe-test-30m  [kernel.kallsyms]  [k] system_call_after_swapgs
     5.60%  pipe-test-30m  pipe-test-30m      [.] main
     5.55%  pipe-test-30m  [kernel.kallsyms]  [k] find_next_bit
     5.55%  pipe-test-30m  [kernel.kallsyms]  [k] __might_sleep
     5.46%  pipe-test-30m  [kernel.kallsyms]  [k] _raw_spin_lock_irqsave
     4.55%  pipe-test-30m  [kernel.kallsyms]  [k] sched_clock
     3.82%  pipe-test-30m  [kernel.kallsyms]  [k] pipe_wait
     3.73%  pipe-test-30m  [kernel.kallsyms]  [k] sys_write
     3.65%  pipe-test-30m  [kernel.kallsyms]  [k] anon_pipe_buf_release
     3.61%  pipe-test-30m  [kernel.kallsyms]  [k] update_curr
     2.75%  pipe-test-30m  [kernel.kallsyms]  [k] select_task_rq_fair

-tip+patches:
# taskset 8 perf record -e branch-misses /root/pipe-test-30m

# perf report | head -n 20
# Events: 314K branch-misses
#
# Overhead        Command      Shared Object                              Symbol
# ........  .............  .................  .....................................
#
     7.66%  pipe-test-30m  [kernel.kallsyms]  [k] __might_sleep
     7.59%  pipe-test-30m  [kernel.kallsyms]  [k] system_call_after_swapgs
     5.88%  pipe-test-30m  [kernel.kallsyms]  [k] kill_fasync
     4.42%  pipe-test-30m  [kernel.kallsyms]  [k] fsnotify
     3.96%  pipe-test-30m  [kernel.kallsyms]  [k] update_curr
     3.93%  pipe-test-30m  [kernel.kallsyms]  [k] system_call
     3.91%  pipe-test-30m  [kernel.kallsyms]  [k] update_stats_wait_end
     3.90%  pipe-test-30m  [kernel.kallsyms]  [k] sys_read
     3.88%  pipe-test-30m  pipe-test-30m      [.] main
     3.86%  pipe-test-30m  [kernel.kallsyms]  [k] select_task_rq_fair
     3.81%  pipe-test-30m  libc-2.11.1.so     [.] __GI_read
     3.73%  pipe-test-30m  [kernel.kallsyms]  [k] sysret_check
     3.70%  pipe-test-30m  [kernel.kallsyms]  [k] sys_write
     3.66%  pipe-test-30m  [kernel.kallsyms]  [k] ret_from_sys_call
     3.56%  pipe-test-30m  [kernel.kallsyms]  [k] fsnotify_access
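
To put a rough number on "less top-heavy": summing the top-15 overheads
from the two reports above gives roughly 82% for -tip vs. 67% for
-tip+patches, so the top symbols cover noticeably less of the profile
with the patches. Trivial check, with the values copied from the output
above:

/* Sum the top-15 overhead percentages from the two perf reports above. */
#include <stdio.h>

int main(void)
{
        double tip[] = { 11.15, 7.70, 6.63, 6.11, 5.74, 5.60, 5.55, 5.55,
                         5.46, 4.55, 3.82, 3.73, 3.65, 3.61, 2.75 };
        double tip_patches[] = { 7.66, 7.59, 5.88, 4.42, 3.96, 3.93, 3.91,
                                 3.90, 3.88, 3.86, 3.81, 3.73, 3.70, 3.66,
                                 3.56 };
        double s1 = 0.0, s2 = 0.0;

        for (int i = 0; i < 15; i++) {
                s1 += tip[i];
                s2 += tip_patches[i];
        }
        /* Roughly 82% vs. 67%: the head of the profile covers less with
         * the patches, consistent with a longer tail of branch misses. */
        printf("top-15 overhead: -tip %.1f%%, -tip+patches %.1f%%\n", s1, s2);
        return 0;
}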

-Thanks,
Nikhil
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
