Message-ID: <509D32C2.2090104@linux.vnet.ibm.com>
Date: Fri, 09 Nov 2012 22:13:46 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
CC: Mel Gorman <mgorman@...e.de>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
akpm@...ux-foundation.org, mjg59@...f.ucam.org,
paulmck@...ux.vnet.ibm.com, maxime.coquelin@...ricsson.com,
loic.pallardy@...ricsson.com, arjan@...ux.intel.com,
kmpark@...radead.org, kamezawa.hiroyu@...fujitsu.com,
lenb@...nel.org, rjw@...k.pl, gargankita@...il.com,
amit.kachhap@...aro.org, thomas.abraham@...aro.org,
santosh.shilimkar@...com, linux-pm@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org
Subject: Re: [RFC PATCH 0/8][Sorted-buddy] mm: Linux VM Infrastructure to
support Memory Power Management

On 11/09/2012 10:04 PM, Srivatsa S. Bhat wrote:
> On 11/09/2012 09:43 PM, Dave Hansen wrote:
>> On 11/09/2012 07:23 AM, Srivatsa S. Bhat wrote:
>>> FWIW, kernbench is actually (and surprisingly) showing a slight performance
>>> *improvement* with this patchset, over vanilla 3.7-rc3, as I mentioned in
>>> my other email to Dave.
>>>
>>> https://lkml.org/lkml/2012/11/7/428
>>>
>>> I don't think I can dismiss it as an experimental error, because I am seeing
>>> those results consistently. I'm trying to find out what's behind that.
>>
>> The only numbers in that link are in the date. :) Let's see the
>> numbers, please.
>>
>
> Sure :) The reason I didn't post the numbers right away was that I didn't
> want to look ridiculous if it later turned out to be an experimental error ;)
> But since I have seen the improvement consistently, I think I can post the
> numbers here with some non-zero confidence.
>
>> If you really have performance improvement to the memory allocator (or
>> something else) here, then surely it can be pared out of your patches
>> and merged quickly by itself. Those kinds of optimizations are hard to
>> come by!
>>
>
> :-)
>
> Anyway, here it goes:
>
> Test setup:
> ----------
> x86 2-socket quad-core machine. (CONFIG_NUMA=n because I figured that my
> patchset might not handle NUMA properly). Mem region size = 512 MB.
>
For CONFIG_NUMA=y on the same machine, the difference between the two kernels
was much smaller, but this patchset still performed better. I wouldn't vouch
that my patchset handles NUMA correctly, but here are the numbers from that
run anyway (if only to show that the results really are repeatable):

Kernbench log for Vanilla 3.7-rc3 (CONFIG_NUMA=y)
=================================================

Kernel: 3.7.0-rc3-vanilla-numa-default
Average Optimal load -j 32 Run (std deviation):
Elapsed Time 589.058 (0.596171)
User Time 7461.26 (1.69702)
System Time 1072.03 (1.54704)
Percent CPU 1448.2 (1.30384)
Context Switches 2.14322e+06 (4042.97)
Sleeps 1847230 (2614.96)

Kernbench log for this sorted-buddy patchset (CONFIG_NUMA=y)
============================================================

Kernel: 3.7.0-rc3-sorted-buddy-numa-default
Average Optimal load -j 32 Run (std deviation):
Elapsed Time 577.182 (0.713772)
User Time 7315.43 (3.87226)
System Time 1043 (1.12855)
Percent CPU 1447.6 (2.19089)
Context Switches 2117022 (3810.15)
Sleeps 1.82966e+06 (4149.82)
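
To put some rough arithmetic behind the "repeatable" claim, here is a quick
Python sketch comparing the Elapsed Time means and standard deviations from
the logs above and from the CONFIG_NUMA=n logs quoted below. The iteration
count isn't shown in these logs, so this is only a separation-of-means sanity
check, not a proper significance test:

    # Compare the kernbench "Elapsed Time" lines: vanilla vs. sorted-buddy.
    # (mean seconds, std deviation) pairs copied from the logs in this mail.
    from math import sqrt

    results = {
        "CONFIG_NUMA=y": {"vanilla": (589.058, 0.596171),
                          "sorted-buddy": (577.182, 0.713772)},
        "CONFIG_NUMA=n": {"vanilla": (650.742, 2.49774),
                          "sorted-buddy": (591.696, 0.660969)},
    }

    for config, r in results.items():
        (v_mean, v_sd), (s_mean, s_sd) = r["vanilla"], r["sorted-buddy"]
        diff = v_mean - s_mean                        # how much faster sorted-buddy was
        pooled_sd = sqrt((v_sd ** 2 + s_sd ** 2) / 2)
        print("%s: %.1fs faster (%.1f%%), ~%.0f pooled std deviations apart"
              % (config, diff, 100.0 * diff / v_mean, diff / pooled_sd))

In both configurations the gap between the means is far larger than the
run-to-run noise, which is why I don't think this is an experimental error.
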
Regards,
Srivatsa S. Bhat
> Kernbench log for Vanilla 3.7-rc3
> =================================
>
> Kernel: 3.7.0-rc3-vanilla-default
> Average Optimal load -j 32 Run (std deviation):
> Elapsed Time 650.742 (2.49774)
> User Time 8213.08 (17.6347)
> System Time 1273.91 (6.00643)
> Percent CPU 1457.4 (3.64692)
> Context Switches 2250203 (3846.61)
> Sleeps 1.8781e+06 (5310.33)
>
> Kernbench log for this sorted-buddy patchset
> ============================================
>
> Kernel: 3.7.0-rc3-sorted-buddy-default
> Average Optimal load -j 32 Run (std deviation):
> Elapsed Time 591.696 (0.660969)
> User Time 7511.97 (1.08313)
> System Time 1062.99 (1.1109)
> Percent CPU 1448.6 (1.94936)
> Context Switches 2.1496e+06 (3507.12)
> Sleeps 1.84305e+06 (3092.67)
>