Message-ID: <1365669862.19620.129.camel@marge.simpson.net>
Date: Thu, 11 Apr 2013 10:44:22 +0200
From: Mike Galbraith <efault@....de>
To: Michael Wang <wangyun@...ux.vnet.ibm.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
Peter Zijlstra <a.p.zijlstra@...llo.nl>,
LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...nel.org>, Alex Shi <alex.shi@...el.com>,
Namhyung Kim <namhyung@...nel.org>,
Paul Turner <pjt@...gle.com>,
Andrew Morton <akpm@...ux-foundation.org>,
"Nikunj A. Dadhania" <nikunj@...ux.vnet.ibm.com>,
Ram Pai <linuxram@...ibm.com>
Subject: Re: [PATCH] sched: wake-affine throttle
On Thu, 2013-04-11 at 16:26 +0800, Michael Wang wrote:
> The 1:N pattern is a good reason to explain why the chance that the wakee's
> hot data is cached on curr_cpu is lower, and since it's just 'lower', not
> 'extinct', once the throttle interval is large enough things will balance
> out again. This can be seen in my tests: when the interval becomes too big,
> the improvement starts to drop.
Magnitude of improvement drops just because there's less damage done
methinks. You'll eventually run out of measurable damage :)
Yes, it's not really extinct, you _can_ reap a gain, it's just not at
all likely to work out. A more symmetric load will fare better, but any
1:N thing just has to spread far and wide to have any chance to perform.
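For readers following along, a minimal sketch of the throttle idea under
discussion (kernel-style, illustrative only; the per-task field and the
sysctl name here are assumptions, not necessarily the identifiers used in
the actual patch):

	/*
	 * Illustrative sketch only -- not the patch itself. Assumes a
	 * per-task timestamp of the last unprofitable affine wakeup
	 * (p->last_affine_fail is a made-up field) and a tunable throttle
	 * interval in milliseconds.
	 */
	static int wake_affine_throttled(struct task_struct *p)
	{
		/* Within the throttle window: skip the affine pull attempt. */
		return time_before(jiffies, p->last_affine_fail +
				   msecs_to_jiffies(sysctl_sched_wake_affine_interval));
	}

	static void note_wake_affine_failure(struct task_struct *p)
	{
		/* Remember when pulling the wakee to the waker's CPU last failed. */
		p->last_affine_fail = jiffies;
	}

The effect on a 1:N load is that the waker (the "mother") is no longer
dragged together with every wakee onto the same CPU on each wakeup; affine
pulls are only retried once the interval has elapsed.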
> Hmm... that's an interesting point: the workload contains works of
> different 'priority' that depend on each other. If the mother is starving,
> all the kids can do nothing but wait for her. Maybe that's why the benefit
> is so significant, since in such a case the mother's slightly quicker
> response makes all the kids happy :)
Exactly. The entire load is server latency bound. Keep the server on
cpu, the load performs as best it can given unavoidable data miss cost.
-Mike