Message-ID: <515A9859.6000606@intel.com>
Date: Tue, 02 Apr 2013 16:35:37 +0800
From: Alex Shi <alex.shi@...el.com>
To: Michael Wang <wangyun@...ux.vnet.ibm.com>
CC: mingo@...hat.com, peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, efault@....de,
morten.rasmussen@....com, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org,
len.brown@...el.com, rafael.j.wysocki@...el.com, jkosina@...e.cz,
clark.williams@...il.com, tony.luck@...el.com,
keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v3 0/8] sched: use runnable avg in load balance
On 04/02/2013 03:23 PM, Michael Wang wrote:
> > [patch v3 8/8] sched: use instant load for burst wake up
> I've tested the patch set on 12 cpu X86 box with 3.9.0-rc2, and pgbench
> show regression on high-end this time.
>
> | db_size | clients |  base | | patch |
> |         |         | (tps) | | (tps) |
> +---------+---------+-------+ +-------+
> | 22 MB | 1 | 10662 | | 10446 |
> | 22 MB | 2 | 21483 | | 20887 |
> | 22 MB | 4 | 42046 | | 41266 |
> | 22 MB | 8 | 55807 | | 51987 |
> | 22 MB | 12 | 50768 | | 50974 |
> | 22 MB | 16 | 49880 | | 49510 |
> | 22 MB | 24 | 45904 | | 42398 |
> | 22 MB | 32 | 43420 | | 40995 |
> | 7484 MB | 1 | 7965 | | 7376 |
> | 7484 MB | 2 | 19354 | | 19149 |
> | 7484 MB | 4 | 37552 | | 37458 |
> | 7484 MB | 8 | 48655 | | 46618 |
> | 7484 MB | 12 | 45778 | | 45756 |
> | 7484 MB | 16 | 45659 | | 44911 |
> | 7484 MB | 24 | 42192 | | 37185 | -11.87%
> | 7484 MB | 32 | 36385 | | 34447 |
> | 15 GB | 1 | 7677 | | 7359 |
> | 15 GB | 2 | 19227 | | 19049 |
> | 15 GB | 4 | 37335 | | 36947 |
> | 15 GB | 8 | 48130 | | 46898 |
> | 15 GB | 12 | 45393 | | 43986 |
> | 15 GB | 16 | 45110 | | 45719 |
> | 15 GB | 24 | 41415 | | 36813 | -11.11%
> | 15 GB | 32 | 35988 | | 34025 |
>
> The regression may be caused by wake_affine()'s higher overhead, and
> pgbench is really sensitive to this stuff...
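The two regression figures flagged in the table can be double-checked from the quoted tps numbers. A minimal sketch (assuming the percentages are computed as the relative change from the base column to the patched column):

```python
def regression_pct(base, patched):
    """Relative tps change; negative means the patched kernel is slower."""
    return (patched - base) / base * 100.0

# The two rows flagged in the table above:
print(f"{regression_pct(42192, 37185):.2f}%")  # 7484 MB, 24 clients -> -11.87%
print(f"{regression_pct(41415, 36813):.2f}%")  # 15 GB,   24 clients -> -11.11%
```

Both reproduce the -11.87% and -11.11% annotations, confirming the second tps column is the patched run.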
Thanks for testing. Could you remove the last patch and test again? I
want to know whether the last patch has an effect on pgbench.
--
Thanks Alex
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/