Message-ID: <20110422210335.GB21364@elte.hu>
Date:	Fri, 22 Apr 2011 23:03:35 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Stephane Eranian <eranian@...gle.com>
Cc:	Arnaldo Carvalho de Melo <acme@...radead.org>,
	linux-kernel@...r.kernel.org, Andi Kleen <ak@...ux.intel.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Lin Ming <ming.m.lin@...el.com>,
	Arnaldo Carvalho de Melo <acme@...hat.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>, eranian@...il.com,
	Arun Sharma <asharma@...com>,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>
Subject: Re: [generalized cache events] Re: [PATCH 1/1] perf tools: Add
 missing user space support for config1/config2


* Stephane Eranian <eranian@...gle.com> wrote:

> Let's go back to your example.
> Performance counter stats for './array' (10 runs):
> 
>          6,719,130 cycles:u                   ( +-   0.662% )
>          5,084,792 instructions:u           #      0.757 IPC     ( +-   0.000% )
>          1,037,032 l1-dcache-loads:u          ( +-   0.009% )
>          1,003,604 l1-dcache-load-misses:u    ( +-   0.003% )
> 
> Looking at this I don't think you can pinpoint which function has a problem 
> [...]

In my previous mail I showed how to pinpoint specific functions. You bring up 
an interesting further question, cost/benefit analysis:

> [...] and whether or not there is a real problem. You need to evaluate the 
> penalty. Once you know that, you can estimate the potential gain from fixing 
> the code. Arun rightly pointed that out in his answer. How do you know the 
> penalty if you don't decompose some more?

We can measure that even with today's tooling - which doesn't do cost/benefit 
analysis out of the box. In my previous example I showed the cache-misses 
profile:

   weight    samples  pcnt funct DSO
   ______    _______ _____ _____ ______________________

      1.9       6184 98.8% func2 /home/mingo/opt/array2
      0.0         69  1.1% func1 /home/mingo/opt/array2

and here's the cycles profile:

             samples  pcnt funct DSO
             _______ _____ _____ ______________________

             2555.00 67.4% func2 /home/mingo/opt/array2
             1220.00 32.2% func1 /home/mingo/opt/array2

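(array2.c itself is not shown in this thread - so below is only a plausible 
reconstruction, with made-up names and sizes, of the kind of access pattern 
these profiles suggest: a 2-D array that func1 walks row-major and func2 
walks column-major.)

/*
 * Hypothetical reconstruction - not the real array2.c.  func2 walks the
 * array column-major, so each load touches a new cache line and nearly
 * every l1-dcache-load misses; func1 walks the same array row-major and
 * mostly hits in L1.
 */
#include <stdio.h>

#define ROWS 1024
#define COLS 1024

static int array[ROWS][COLS];           /* 4 MB - far larger than L1 */

static void func1(void)
{
        int i, j;

        for (i = 0; i < ROWS; i++)      /* row-major: sequential loads */
                for (j = 0; j < COLS; j++)
                        array[i][j]++;
}

static void func2(void)
{
        int i, j;

        for (j = 0; j < COLS; j++)      /* column-major: 4 KB stride */
                for (i = 0; i < ROWS; i++)
                        array[i][j]++;
}

int main(void)
{
        int n;

        for (n = 0; n < 100; n++) {
                func1();
                func2();
        }
        printf("%d\n", array[0][0]);    /* keep the work observable */
        return 0;
}

Each column-major pass does ROWS*COLS = ~1 million loads at a near-100% miss 
rate - the kind of load/miss ratio seen in the stats quoted at the top.
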
So, given that there were no other big sources of misses:

 $ perf stat -a -e branch-misses:u -e l1-dcache-load-misses:u -e l1-dcache-store-misses:u -e l1-icache-load-misses:u sleep 1

 Performance counter stats for 'sleep 1':

            70,674 branch-misses:u         
       347,992,027 l1-dcache-load-misses:u 
             1,779 l1-dcache-store-misses:u
             8,007 l1-icache-load-misses:u 

        1.000982021  seconds time elapsed

I can tell you that by fixing the cache misses in that function the code will 
be roughly 33% faster. (Roughly: func1 and func2 presumably do the same amount 
of work, so a fixed func2 should cost about as much as the cache-friendly 
func1 does - i.e. the expected runtime is around twice func1's 32% share, or 
~64% of the original.)

So I fixed the bug. Before the fix, 100 iterations of func1+func2 took about 
300 msecs:

 $ perf stat -e cpu-clock --repeat 10 ./array2

 Performance counter stats for './array2' (10 runs):

        298.405074 cpu-clock                  ( +-   1.823% )

After the fix the same 100 iterations took about 190 msecs:

 $ perf stat -e cpu-clock --repeat 10 ./array2

 Performance counter stats for './array2' (10 runs):

        189.409569 cpu-clock                  ( +-   0.019% )

        0.190007596  seconds time elapsed   ( +-   0.025% )

That is 63% of the original runtime, i.e. about 37% less time. And no, I did 
the calculation first and only then measured the optimized code.
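
In the reconstruction sketched above, the corresponding fix would simply be a 
loop interchange in func2, making it as cache-friendly as func1:

static void func2(void)
{
        int i, j;

        for (i = 0; i < ROWS; i++)      /* fixed: row-major, sequential */
                for (j = 0; j < COLS; j++)
                        array[i][j]++;
}

Same work and same result - but consecutive loads now hit the same cache line 
instead of forcing a new line (and its miss) on every access.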

Now it would be nice to automate such analysis some more within perf - but I 
think I have established well enough the principle that generic cache events 
can be used for such measurements.

Also, we could certainly add more generic events. A stalled-cycles event, for 
example, would be useful for collecting all (or at least most) of the 'harmful 
delays' the execution flow can experience. Want to take a stab at that patch?

Thanks,

	Ingo