Message-ID: <1364891674.4976.65.camel@marge.simpson.net>
Date: Tue, 02 Apr 2013 10:34:34 +0200
From: Mike Galbraith <efault@....de>
To: Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc: Alex Shi <alex.shi@...el.com>, mingo@...hat.com,
peterz@...radead.org, tglx@...utronix.de,
akpm@...ux-foundation.org, arjan@...ux.intel.com, bp@...en8.de,
pjt@...gle.com, namhyung@...nel.org, morten.rasmussen@....com,
vincent.guittot@...aro.org, gregkh@...uxfoundation.org,
preeti@...ux.vnet.ibm.com, viresh.kumar@...aro.org,
linux-kernel@...r.kernel.org, len.brown@...el.com,
rafael.j.wysocki@...el.com, jkosina@...e.cz,
clark.williams@...il.com, tony.luck@...el.com,
keescook@...omium.org, mgorman@...e.de, riel@...hat.com
Subject: Re: [patch v3 0/8] sched: use runnable avg in load balance
On Tue, 2013-04-02 at 15:23 +0800, Michael Wang wrote:
> On 04/02/2013 11:23 AM, Alex Shi wrote:
> [snip]
> >
> > [patch v3 1/8] Revert "sched: Introduce temporary FAIR_GROUP_SCHED
> > [patch v3 2/8] sched: set initial value of runnable avg for new
> > [patch v3 3/8] sched: only count runnable avg on cfs_rq's nr_running
> > [patch v3 4/8] sched: update cpu load after task_tick.
> > [patch v3 5/8] sched: compute runnable load avg in cpu_load and
> > [patch v3 6/8] sched: consider runnable load average in move_tasks
> > [patch v3 7/8] sched: consider runnable load average in
> > [patch v3 8/8] sched: use instant load for burst wake up
>
> I've tested the patch set on a 12-cpu x86 box with 3.9.0-rc2, and pgbench
> shows a regression on the high end this time.
>
> | db_size | clients | base  |  | patch |
> +---------+---------+-------+  +-------+
> | 22 MB   |       1 | 10662 |  | 10446 |
> | 22 MB   |       2 | 21483 |  | 20887 |
> | 22 MB   |       4 | 42046 |  | 41266 |
> | 22 MB   |       8 | 55807 |  | 51987 |
> | 22 MB   |      12 | 50768 |  | 50974 |
> | 22 MB   |      16 | 49880 |  | 49510 |
> | 22 MB   |      24 | 45904 |  | 42398 |
> | 22 MB   |      32 | 43420 |  | 40995 |
> | 7484 MB |       1 |  7965 |  |  7376 |
> | 7484 MB |       2 | 19354 |  | 19149 |
> | 7484 MB |       4 | 37552 |  | 37458 |
> | 7484 MB |       8 | 48655 |  | 46618 |
> | 7484 MB |      12 | 45778 |  | 45756 |
> | 7484 MB |      16 | 45659 |  | 44911 |
> | 7484 MB |      24 | 42192 |  | 37185 | -11.87%
> | 7484 MB |      32 | 36385 |  | 34447 |
> | 15 GB   |       1 |  7677 |  |  7359 |
> | 15 GB   |       2 | 19227 |  | 19049 |
> | 15 GB   |       4 | 37335 |  | 36947 |
> | 15 GB   |       8 | 48130 |  | 46898 |
> | 15 GB   |      12 | 45393 |  | 43986 |
> | 15 GB   |      16 | 45110 |  | 45719 |
> | 15 GB   |      24 | 41415 |  | 36813 | -11.11%
> | 15 GB   |      32 | 35988 |  | 34025 |
>
> The regression may be caused by wake_affine()'s higher overhead, and
> pgbench is really sensitive to this stuff...
For grins, you could try running the whole thing SCHED_BATCH. (/me sees
singing/dancing red herring whenever wake_affine() and pgbench appear in
the same sentence;)
-Mike
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/