Message-ID: <20140516143002.GU50500@redhat.com>
Date:	Fri, 16 May 2014 10:30:02 -0400
From:	Don Zickus <dzickus@...hat.com>
To:	Jiri Olsa <jolsa@...hat.com>
Cc:	acme@...stprotocols.net, peterz@...radead.org,
	LKML <linux-kernel@...r.kernel.org>, namhyung@...il.com,
	eranian@...gle.com, Andi Kleen <andi@...stfloor.org>
Subject: Re: [PATCH 6/6] perf: Add dcacheline sort

On Fri, May 16, 2014 at 04:05:51PM +0200, Jiri Olsa wrote:
> On Fri, May 16, 2014 at 09:30:58AM -0400, Don Zickus wrote:
> > On Fri, May 16, 2014 at 01:47:57PM +0200, Jiri Olsa wrote:
> > > On Tue, May 13, 2014 at 12:48:17PM -0400, Don Zickus wrote:
> > > > In perf's 'mem-mode', one can get access to a whole bunch of details specific to a
> > > > particular sample instruction.  A bunch of those details relate to the data
> > > > address.
> > > > 
> > > > One interesting thing you can do with data addresses is to convert them into the unique
> > > > cacheline they belong to.  Organizing these data cachelines into similar groups and sorting
> > > > them can reveal cache contention.
> > > > 
> > > > This patch creates an algorithm based on various sample details that can help group
> > > > entries together into data cachelines and allows 'perf report' to sort on it.
> > > > 
> > > > The algorithm relies on having proper mmap2 support in the kernel to help determine
> > > > if the memory map the data address belongs to is private to a pid or globally shared.
> > > > 
> > > > The algorithm is as follows:
> > > > 
> > > > o group cpumodes together
> > > > o group entries with discovered maps together
> > > > o sort on major, minor, inode and inode generation numbers
> > > > o if userspace anon, then sort on pid
> > > > o sort on cachelines based on data addresses
> > > 
> > > needs some column width refresh or something..? ;-)
> > 
> > Not sure what you mean here.
> > 
> > > 
> > > # Overhead  Data Cacheline         
> > > # ........  .......................
> 
> header not being wide enough to cover the longest data

Ah, ok.  I am not sure of the right way to fix that, as the current
header seems to be hardcoded with a bunch of spaces.  Is there a trick to
dynamically space it correctly based on the data provided?

> 
> 
> > > #
> > >      5.42%  [k] 0xffff8801ed832c40 
> > >      5.29%  [.] sys_errlist@@GLIBC_2.12+0xffffffcbf7dfc1ff                       
> > >      3.16%  [k] 0xffffffffff5690c0 
> > > 
> > > 
> > > also I've got again perf hanged up on opening device file
> > > 
> > > [jolsa@...va perf]$ sudo strace -p 29445
> > > Process 29445 attached
> > > open("/dev/snd/pcmC0D0p", O_RDONLY^CProcess 29445 detached
> > > 
> > > another one I recall was /dev/dri/card0 touched by X server
> > > 
> > > I guess those device files allow mmap'ing memory and we recorded
> > > memory accesses there.. we need to check for this and not try to
> > > open device files
> > 
> > Ok.  And that problem doesn't happen when my patch is not applied?  I am
> > not sure how this patch causes open device hangs.  I'll try to run this on
> > a box with X server running to duplicate.
> 
> I think it came with the memory profiling, because we treat
> data areas as dsos.. open and look for symbols

Yeah, I figured that too.  I guess I was trying to point out this is a
generic memory profiling issue that isn't related to my patch.  But I will
still try to track down the problem as it needs to be fixed. :-)

Cheers,
Don
