Message-ID: <1284197541.2251.23.camel@laptop>
Date: Sat, 11 Sep 2010 11:32:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Venkatesh Pallipadi <venki@...gle.com>
Cc: Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
Suresh Siddha <suresh.b.siddha@...el.com>,
Mike Galbraith <efault@....de>,
Gregory Haskins <ghaskins@...ell.com>
Subject: Re: [PATCH] sched: Increment cache_nice_tries only on periodic lb
On Fri, 2010-09-10 at 18:19 -0700, Venkatesh Pallipadi wrote:
> The scheduler uses cache_nice_tries as an indicator for cache-hot and active
> load balancing when normal load balancing fails. Currently, this value is
> bumped on any failed load balance attempt. That ends up being not so nice to
> workloads that enter/exit idle often, as they do new_idle balance more
> frequently, and that pretty soon results in cache-hot tasks being pulled in.
>
> Making cache_nice_tries ignore failed new_idle balances makes better sense.
> With that, only failures during periodic load balancing get accounted, and
> the rate at which cache_nice_tries accumulates no longer depends on idle
> entry/exit (short-running sleep/wakeup kinds of tasks). This reduces the
> movement of cache-hot tasks.

Seems to make sense..
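
If I read it right, the accounting change boils down to guarding the failure
counter in load_balance() along these lines (a sketch, not the actual hunk;
nr_balance_failed is the counter that gets compared against cache_nice_tries):

	if (!ld_moved) {
		schedstat_inc(sd, lb_failed[idle]);
		/*
		 * Only let periodic balancing bump the failure count;
		 * new-idle balancing can be very frequent and would
		 * otherwise blow past cache_nice_tries in no time,
		 * triggering pulls of cache-hot tasks and active balancing.
		 */
		if (idle != CPU_NEWLY_IDLE)
			sd->nr_balance_failed++;
		...
	}
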
I've also wondered if it would make sense to restore 0437e109e (sched:
zap the migration init / cache-hot balancing code), especially since what
the comment says isn't actually true anymore: we don't use the tree for
load-balancing.

But even if we did, the left side of the tree isn't the cache-cold side,
nor, I'd guess, is the right side.. tricky stuff, that.
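
For context on what that zap left behind: cache hotness is now just a fixed
recency cutoff. Roughly (from memory, modulo the buddy checks, not a verbatim
copy), task_hot() boils down to:

	static int
	task_hot(struct task_struct *p, u64 now, struct sched_domain *sd)
	{
		s64 delta;

		/* Tunable: -1 means everything is hot, 0 means nothing is. */
		if (sysctl_sched_migration_cost == -1)
			return 1;
		if (sysctl_sched_migration_cost == 0)
			return 0;

		/* Hot if the task ran within the last migration_cost ns. */
		delta = now - p->se.exec_start;

		return delta < (s64)sysctl_sched_migration_cost;
	}
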
I know Gregory Haskins has played with restoring it, and I think he found
some benefit from it, although he didn't pursue it. It might be worth
seeing whether it helps your workloads.

I've queued the patch, thanks!