Message-ID: <5512A88A.1010608@redhat.com>
Date:	Wed, 25 Mar 2015 08:22:34 -0400
From:	Joe Mario <jmario@...hat.com>
To:	David Ahern <dsahern@...il.com>, Don Zickus <dzickus@...hat.com>
CC:	acme@...nel.org, linux-kernel@...r.kernel.org,
	Jiri Olsa <jolsa@...hat.com>
Subject: Re: [PATCH] perf tool: Fix ppid for synthesized fork events

On 03/24/2015 05:12 PM, David Ahern wrote:
> On 3/24/15 2:10 PM, Don Zickus wrote:
>> He does this with and without the patch.  The difference is usually over 50%
>> extra time with the patch for both the record timings and report timings. :-(
>
> I find that shocking. The patch only populates ppid and ptid with a value read from the file that is already opened and processed. Most of the patch is just plumbing the value from the low level function that processes the status file back to synthesize_fork.
>
> What benchmark is this? Is it something I can download and run? If so, details? Site, command, args?
>
> Thanks,
> David

Hi David:

We ran "time perf mem record -a -e cpu/mem-loads,ldlat=50/pp -e cpu/mem-stores/pp sleep 10" on a system that was running SPECjbb2013 in the background.  There were about 10,000 java threads with about 500 to 800 in a runnable state at any given time.   We ran it on a 4 socket x86 IVB server.

We had two perf binaries: one with your patch and one without it.  Because the benchmark doesn't always present a constant load, we ran the above perf command in a loop, alternating between the patched and unpatched versions.  The elapsed wall clock times (the "real" field from time) for the perf with your patch were typically >= 50% longer than for the equivalent unpatched perf.
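
Something along these lines reproduces the setup (perf.patched and perf.unpatched are just placeholder paths for the two builds, not the actual names we used):

  #!/bin/sh
  # Alternate the two builds so drift in the background load hits both
  # roughly equally.  perf.patched / perf.unpatched are placeholder paths.
  for i in 1 2 3 4 5 6 7 8 9 10; do
      for p in ./perf.patched ./perf.unpatched; do
          echo "== $p, iteration $i =="
          time $p mem record -a -e cpu/mem-loads,ldlat=50/pp -e cpu/mem-stores/pp sleep 10
      done
  done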

We can't give out a copy of the SPEC benchmark (part of the SPEC agreement).  Try to find some application (or create your own) to load the system with lots of busy threads.
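
As a very rough stand-in, even a script that forks a few thousand busy loops gives "perf record -a" plenty of tasks to synthesize at startup.  It spawns processes rather than threads, so it's not a faithful model of the 10,000-thread Java load, but it should exercise the same /proc walk:

  #!/bin/sh
  # Rough stand-in load: spawn N busy-looping processes so "perf record -a"
  # has plenty of tasks to walk in /proc at startup.  N defaults to 2000.
  N=${1:-2000}
  trap 'kill 0' INT
  i=0
  while [ "$i" -lt "$N" ]; do
      ( while :; do :; done ) &
      i=$((i + 1))
  done
  wait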

Joe
