Message-ID: <20091210185459.GA8697@elte.hu>
Date:	Thu, 10 Dec 2009 19:54:59 +0100
From:	Ingo Molnar <mingo@...e.hu>
To:	Christoph Lameter <cl@...ux-foundation.org>
Cc:	KAMEZAWA Hiroyuki <kamezawa.hiroyu@...fujitsu.com>,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	"linux-mm@...ck.org" <linux-mm@...ck.org>,
	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	"akpm@...ux-foundation.org" <akpm@...ux-foundation.org>,
	minchan.kim@...il.com
Subject: Re: [RFC mm][PATCH 2/5] percpu cached mm counter


* Christoph Lameter <cl@...ux-foundation.org> wrote:

> On Thu, 10 Dec 2009, Ingo Molnar wrote:
> 
> >
> > * Christoph Lameter <cl@...ux-foundation.org> wrote:
> >
> > > On Thu, 10 Dec 2009, Ingo Molnar wrote:
> > >
> > > > No, i'm not suggesting that - i'm just suggesting that right now 
> > > > MM stats are not very well suited to be exposed via perf. If we 
> > > > wanted to measure/sample the information in /proc/<pid>/statm it 
> > > > just wouldn't be possible. We have a few events like pagefaults 
> > > > and a few tracepoints as well - but more would be possible IMO.
> > >
> > > Vital MM stats are already exposed via the /proc/<pid> interfaces. 
> > > Performance monitoring is optional; the MM VM stats are used for 
> > > VM decisions on memory and process handling.
> >
> > You list a few facts here but what is your point?
> 
> The stats are exposed already in a well defined way. [...]

They are exposed in a well-defined but limited way: you cannot profile 
based on those stats, you cannot measure them transparently across a 
workload at precise task boundaries, and you cannot trace based on 
them.

For example, just via the simple page fault events we can today do 
things like:

 aldebaran:~> perf stat -e minor-faults /bin/bash -c "echo hello"
 hello

  Performance counter stats for '/bin/bash -c echo hello':

             292  minor-faults            

     0.000884744  seconds time elapsed

 aldebaran:~> perf record -e minor-faults -c 1 -f -g firefox                  
 Error: cannot open display: :0
 [ perf record: Woken up 3 times to write data ]
 [ perf record: Captured and wrote 0.324 MB perf.data (~14135 samples) ]

 aldebaran:~> perf report
 no symbols found in /bin/sed, maybe install a debug package?
 # Samples: 5312
 #
 # Overhead         Command                             Shared Object  Symbol
 # ........  ..............  ........................................  ......
 #
     12.54%         firefox  ld-2.10.90.so                             
 [.] _dl_relocate_object
                   |
                   --- _dl_relocate_object
                       dl_open_worker
                       _dl_catch_error
                       dlopen_doit
                       0x7fffdf8c6562
                       0x68733d54524f5053

     4.95%         firefox  libc-2.10.90.so                           
 [.] __GI_memset
                   |
                   --- __GI_memset
 ...

I.e. 12.54% of the page faults during the Firefox startup occur in the 
dlopen_doit()->_dl_catch_error()->dl_open_worker()->_dl_relocate_object() 
call path, 4.95% happen in __GI_memset() - etc.

> [...] Exposing via perf is outside of the scope of his work.

Please give some thought to intelligent instrumentation solutions, and 
please think "outside of the scope" of your usual routine.

Thanks,

	Ingo
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
