Date:	Thu, 13 Mar 2014 16:03:52 -0400
From:	Don Zickus <dzickus@...hat.com>
To:	peterz@...radead.org
Cc:	eranian@...gle.com, jmario@...hat.com, jolsa@...hat.com,
	acme@...hat.com, linux-kernel@...r.kernel.org, lwoodman@...hat.com
Subject: perf MMAP2 interface and COW faults

Hi Peter,

So we found another corner case with the MMAP2 interface.  I don't think it
is a big hurdle to overcome; I just wanted a suggestion.

Joe ran specjbb2013 (which creates about 10,000 java threads across 9
processes) and our c2c tool turned up some cacheline collision data on
libjvm.so.  This didn't make sense because you shouldn't be able to write
to a shared library.

Even worse, our tool said it affected all the java processes and a majority
of the threads, which again didn't make sense because this shared library
should be local to each pid's memory.

Anyway, what we determined is that the shared library's mmap data was
non-zero (because it was backed by a file, libjvm.so).  So the assumption
was that if the major, minor, inode, and inode generation numbers were
non-zero, this memory segment was shared across processes.
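To spell that assumption out, here is a minimal Python sketch (not the actual c2c code; the function name and the sample field values are made up for illustration):

```python
def assumed_shared(maj, min_, ino, ino_gen):
    """The tool's original heuristic, simplified: any non-zero
    device/inode info in the MMAP2 record means a file-backed
    mapping, which was taken to imply the segment is shared
    across processes."""
    return (maj, min_, ino, ino_gen) != (0, 0, 0, 0)

# libjvm.so-style mapping: file-backed, so treated as shared
assert assumed_shared(8, 1, 927361, 1)
# anonymous mapping: all zeros, so treated as private
assert not assumed_shared(0, 0, 0, 0)
```

It is exactly this "file-backed implies shared" step that the COW case below breaks.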

So perf set up its map files for the mmap area and then started sampling data
addresses.  A few hundred HITMs were to a virtual address that fell into
the libjvm.so memory segment (which was assumed to be mmap'd across
processes).

Coalescing all the data suggested that multiple pids/tids were contending
for a cacheline in a shared library.

After talking with Larry Woodman, we realized when you write to a 'data' or
'bss' segment of a shared library, you incur a COW fault that maps to an
anonymous page in the pid's memory.  However, perf doesn't see this.
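The effect is easy to reproduce from userspace.  A rough Python sketch (a stand-in file instead of a real library, but the same MAP_PRIVATE semantics the loader uses for a library's writable segments):

```python
import mmap, os, tempfile

# A small file standing in for a shared library's on-disk image.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"original")
    path = f.name

# Map it MAP_PRIVATE, the way the dynamic loader maps a library's
# writable 'data'/'bss' segments.
fd = os.open(path, os.O_RDONLY)
m = mmap.mmap(fd, 8, flags=mmap.MAP_PRIVATE,
              prot=mmap.PROT_READ | mmap.PROT_WRITE)

m[:8] = b"modified"   # first write incurs a COW fault -> anonymous page

# This process sees its private copy; the file (and every other
# process mapping it) is untouched.
assert m[:8] == b"modified"
with open(path, "rb") as fh:
    assert fh.read() == b"original"

# Crucially, /proc/self/maps still shows the same file-backed 'p'
# mapping -- userspace gets no hint that the page is now anonymous,
# which is exactly why perf's view doesn't change.
with open("/proc/self/maps") as maps:
    assert any(path in line for line in maps)
```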

So when all the tids start writing to this 'data' or 'bss' segment, they
generate HITMs within their pid (which is fine).  However, the tool thinks
it affects other pids (which is not fine).

My question is, how can our tool determine if a virtual address is private
to a pid or not?  Originally it had to have a zero for maj, min, ino, and
ino gen.  But for file map'd libraries this doesn't always work because we
don't see COW faults in perf (and we may not want to :-) ).

Is there another technique we can use?  Perhaps during the reading of
/proc/<pid>/maps, if the protection is marked 'p' for private, we just tell
the sort algorithm to sort locally to the process, while an 's' for shared
can be sorted globally based on data addresses?
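For what it's worth, that check would be tiny.  A Python sketch (function name and the two sample maps lines are invented; the permission field layout is per proc(5), where the 4th character is 'p' or 's'):

```python
def sort_scope(maps_line):
    """Proposed heuristic: look at the 4th character of the
    permission field in a /proc/<pid>/maps line.  'p' means the
    mapping is private (COW on write), so data addresses should
    only be compared within that pid; 's' means genuinely shared,
    so addresses can be sorted globally across processes."""
    perms = maps_line.split()[1]          # e.g. "rw-p" or "rw-s"
    return "global" if perms[3] == "s" else "per-pid"

shared = "7f2d4c000000-7f2d4c021000 rw-s 00000000 08:01 1835008 /dev/shm/x"
private = "7f2d4d000000-7f2d4d200000 rw-p 00000000 08:01 927361 /usr/lib/libjvm.so"
assert sort_scope(shared) == "global"
assert sort_scope(private) == "per-pid"
```

That would correctly demote the libjvm.so writable segment to per-pid sorting, at the cost of also demoting any truly read-only private text pages (which never HITM across pids anyway).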

Or something else that tells us that a virtual address has changed its
mapping?  Thoughts?

Cheers,
Don
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
