Message-ID: <50A686C5.7080103@linux.vnet.ibm.com>
Date: Sat, 17 Nov 2012 00:02:37 +0530
From: "Srivatsa S. Bhat" <srivatsa.bhat@...ux.vnet.ibm.com>
To: Dave Hansen <dave@...ux.vnet.ibm.com>
CC: Mel Gorman <mgorman@...e.de>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
akpm@...ux-foundation.org, mjg59@...f.ucam.org,
paulmck@...ux.vnet.ibm.com, maxime.coquelin@...ricsson.com,
loic.pallardy@...ricsson.com, arjan@...ux.intel.com,
kmpark@...radead.org, kamezawa.hiroyu@...fujitsu.com,
lenb@...nel.org, rjw@...k.pl, gargankita@...il.com,
amit.kachhap@...aro.org, thomas.abraham@...aro.org,
santosh.shilimkar@...com, linux-pm@...r.kernel.org,
linux-mm@...ck.org, linux-kernel@...r.kernel.org,
andi@...stfloor.org,
SrinivasPandruvada <srinivas.pandruvada@...ux.intel.com>
Subject: Re: [RFC PATCH 0/8][Sorted-buddy] mm: Linux VM Infrastructure to
support Memory Power Management
On 11/09/2012 10:22 PM, Srivatsa S. Bhat wrote:
> On 11/09/2012 10:13 PM, Srivatsa S. Bhat wrote:
>> On 11/09/2012 10:04 PM, Srivatsa S. Bhat wrote:
>>> On 11/09/2012 09:43 PM, Dave Hansen wrote:
>>>> On 11/09/2012 07:23 AM, Srivatsa S. Bhat wrote:
>>>>> FWIW, kernbench is actually (and surprisingly) showing a slight performance
>>>>> *improvement* with this patchset, over vanilla 3.7-rc3, as I mentioned in
>>>>> my other email to Dave.
>>>>>
>>>>> https://lkml.org/lkml/2012/11/7/428
>>>>>
>>>>> I don't think I can dismiss it as an experimental error, because I am seeing
>>>>> those results consistently... I'm trying to find out what's behind that.
>>>>
>>>> The only numbers in that link are in the date. :) Let's see the
>>>> numbers, please.
>>>>
>>>
>>> Sure :) The reason I didn't post the numbers very eagerly was that I didn't
>>> want them to look ridiculous if it later really turned out to be an error in
>>> the experiment ;) But since I have seen it happen consistently, I think I can
>>> post the numbers here with some non-zero confidence.
>>>
>>>> If you really have performance improvement to the memory allocator (or
>>>> something else) here, then surely it can be pared out of your patches
>>>> and merged quickly by itself. Those kinds of optimizations are hard to
>>>> come by!
>>>>
>>>
>>> :-)
>>>
>>> Anyway, here it goes:
>>>
>>> Test setup:
>>> ----------
>>> x86 2-socket quad-core machine. (CONFIG_NUMA=n because I figured that my
>>> patchset might not handle NUMA properly). Mem region size = 512 MB.
>>>
>>
>> For CONFIG_NUMA=y on the same machine, the difference between the two kernels
>> was much smaller, but nevertheless, this patchset performed better. I wouldn't
>> vouch that my patchset handles NUMA correctly, but here are the numbers from
>> that run anyway (at least to show that I really found the results to be
>> repeatable):
>>
I fixed up the NUMA case (I'll post the updated patch for that soon) and
ran a fresh set of kernbench runs. The difference between mainline and this
patchset is quite tiny, so we can't really claim that this patchset shows a
performance improvement over mainline. However, I can safely conclude that
this patchset doesn't show any performance _degradation_ w.r.t. mainline
in kernbench.
Results from one of the recent kernbench runs:
---------------------------------------------
Kernbench log for Vanilla 3.7-rc3
=================================
Kernel: 3.7.0-rc3
Average Optimal load -j 32 Run (std deviation):
Elapsed Time 330.39 (0.746257)
User Time 4283.63 (3.39617)
System Time 604.783 (2.72629)
Percent CPU 1479 (3.60555)
Context Switches 845634 (6031.22)
Sleeps 833655 (6652.17)
Kernbench log for Sorted-buddy
==============================
Kernel: 3.7.0-rc3-sorted-buddy
Average Optimal load -j 32 Run (std deviation):
Elapsed Time 329.967 (2.76789)
User Time 4230.02 (2.15324)
System Time 599.793 (1.09988)
Percent CPU 1463.33 (11.3725)
Context Switches 840530 (1646.75)
Sleeps 833732 (2227.68)
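
A quick back-of-envelope check on the numbers above (my own illustrative
arithmetic, not part of kernbench itself): the elapsed-time delta between
the two kernels is ~0.42s, while the combined run-to-run noise estimate is
~2.87s, so the delta is well within noise. A minimal C sketch of that
comparison:

/* Sanity-check whether the elapsed-time difference between the two
 * kernels is within run-to-run noise. The means and standard deviations
 * are copied from the kernbench logs above; this is an illustrative
 * sketch, not part of the original test harness.
 * Build with: gcc -o check check.c -lm
 */
#include <stdio.h>
#include <math.h>

int main(void)
{
	double vanilla_mean = 330.39,  vanilla_sd = 0.746257;
	double sorted_mean  = 329.967, sorted_sd  = 2.76789;

	double delta = vanilla_mean - sorted_mean;

	/* Combined noise estimate: sqrt(sd1^2 + sd2^2). */
	double noise = sqrt(vanilla_sd * vanilla_sd +
			    sorted_sd * sorted_sd);

	printf("elapsed-time delta: %.3f s (%.3f%% of vanilla)\n",
	       delta, 100.0 * delta / vanilla_mean);
	printf("combined std deviation: %.3f s\n", noise);
	printf("delta is %s noise\n",
	       fabs(delta) < noise ? "within" : "outside");
	return 0;
}

This only checks the elapsed time, of course, but that is the headline
kernbench figure that the "no degradation" conclusion above rests on.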
Regards,
Srivatsa S. Bhat
--