Date:	Fri, 09 Nov 2012 22:22:42 +0530
From:	"Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To:	Dave Hansen <dave@...ux.vnet.ibm.com>
CC:	Mel Gorman <mgorman@...e.de>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	akpm@...ux-foundation.org, mjg59@...f.ucam.org,
	paulmck@...ux.vnet.ibm.com, maxime.coquelin@...ricsson.com,
	loic.pallardy@...ricsson.com, arjan@...ux.intel.com,
	kmpark@...radead.org, kamezawa.hiroyu@...fujitsu.com,
	lenb@...nel.org, rjw@...k.pl, gargankita@...il.com,
	amit.kachhap@...aro.org, thomas.abraham@...aro.org,
	santosh.shilimkar@...com, linux-pm@...r.kernel.org,
	linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/8][Sorted-buddy] mm: Linux VM Infrastructure to
 support Memory Power Management

On 11/09/2012 10:13 PM, Srivatsa S. Bhat wrote:
> On 11/09/2012 10:04 PM, Srivatsa S. Bhat wrote:
>> On 11/09/2012 09:43 PM, Dave Hansen wrote:
>>> On 11/09/2012 07:23 AM, Srivatsa S. Bhat wrote:
>>>> FWIW, kernbench is actually (and surprisingly) showing a slight performance
>>>> *improvement* with this patchset, over vanilla 3.7-rc3, as I mentioned in
>>>> my other email to Dave.
>>>>
>>>> https://lkml.org/lkml/2012/11/7/428
>>>>
>>>> I don't think I can dismiss it as an experimental error, because I am seeing
>>>> those results consistently. I'm trying to find out what's behind that.
>>>
>>> The only numbers in that link are in the date. :)  Let's see the
>>> numbers, please.
>>>
>>
>> Sure :) The reason I didn't post the numbers very eagerly was that I didn't
>> want it to look ridiculous if it later turned out to be really an error in the
>> experiment ;) But since I have seen it happening consistently I think I can
>> post the numbers here with some non-zero confidence.
>>
>>> If you really have performance improvement to the memory allocator (or
>>> something else) here, then surely it can be pared out of your patches
>>> and merged quickly by itself.  Those kinds of optimizations are hard to
>>> come by!
>>>
>>
>> :-)
>>
>> Anyway, here it goes:
>>
>> Test setup:
>> ----------
>> x86 2-socket quad-core machine. (CONFIG_NUMA=n because I figured that my
>> patchset might not handle NUMA properly). Mem region size = 512 MB.
>>
> 
> For CONFIG_NUMA=y on the same machine, the difference between the two kernels
> was much smaller, but this patchset nevertheless performed better. I wouldn't
> vouch that my patchset handles NUMA correctly, but here are the numbers from
> that run anyway (at least to show that I really found the results to be
> repeatable):
> 
> Kernbench log for Vanilla 3.7-rc3
> =================================
> Kernel: 3.7.0-rc3-vanilla-numa-default
> Average Optimal load -j 32 Run (std deviation):
> Elapsed Time 589.058 (0.596171)
> User Time 7461.26 (1.69702)
> System Time 1072.03 (1.54704)
> Percent CPU 1448.2 (1.30384)
> Context Switches 2.14322e+06 (4042.97)
> Sleeps 1847230 (2614.96)
> 
> Kernbench log for sorted-buddy patchset
> =======================================
> Kernel: 3.7.0-rc3-sorted-buddy-numa-default
> Average Optimal load -j 32 Run (std deviation):
> Elapsed Time 577.182 (0.713772)
> User Time 7315.43 (3.87226)
> System Time 1043 (1.12855)
> Percent CPU 1447.6 (2.19089)
> Context Switches 2117022 (3810.15)
> Sleeps 1.82966e+06 (4149.82)
> 
> 

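Not part of the original mail, but as a quick sanity check of the averages quoted above: the sorted-buddy NUMA run improves on vanilla 3.7-rc3 by roughly 2% on each time metric. A minimal sketch of that arithmetic (the `improvement_pct` helper is illustrative, not anything from the patchset or kernbench):

```python
# Relative improvement implied by the kernbench averages quoted above
# (vanilla 3.7-rc3 vs. sorted-buddy, CONFIG_NUMA=y run).
def improvement_pct(vanilla, patched):
    """Percent reduction relative to the vanilla baseline."""
    return (vanilla - patched) / vanilla * 100.0

metrics = {
    "Elapsed Time": (589.058, 577.182),
    "User Time":    (7461.26, 7315.43),
    "System Time":  (1072.03, 1043.0),
}

for name, (van, pat) in metrics.items():
    # Prints roughly 2.02%, 1.95% and 2.71% respectively.
    print(f"{name}: {improvement_pct(van, pat):.2f}% improvement")
```

Note that the standard deviations quoted for Elapsed Time (0.596 and 0.714 seconds) are far smaller than the ~11.9-second gap between the two averages, which supports the claim that the result is repeatable rather than noise.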
Regards,
Srivatsa S. Bhat

