Message-ID: <1404359462.5137.72.camel@marge.simpson.net>
Date: Thu, 03 Jul 2014 05:51:02 +0200
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Rik van Riel <riel@...hat.com>
Cc: Michael wang <wangyun@...ux.vnet.ibm.com>,
Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>, Alex Shi <alex.shi@...aro.org>,
Paul Turner <pjt@...gle.com>, Mel Gorman <mgorman@...e.de>,
Daniel Lezcano <daniel.lezcano@...aro.org>,
LKML <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched: select 'idle' cfs_rq per task-group to prevent tg-internal imbalance
On Wed, 2014-07-02 at 10:47 -0400, Rik van Riel wrote:
> On 07/01/2014 04:38 AM, Michael wang wrote:
> > On 07/01/2014 04:20 PM, Peter Zijlstra wrote:
> > [snip]
> >>>
> >>> Just wondering could we make this another scheduler feature?
> >>
> >> No; sched_feat() is for debugging, BIG CLUE: it's guarded by
> >> CONFIG_SCHED_DEBUG, anybody using it in production or anywhere else is
> >> broken.
> >>
> >> If people are using it, I should remove or at least randomize the
> >> interface.
> >
> > Fair enough... but are there any suggestions on how to handle this issue?
> >
> > Currently, when dbench runs alongside stress, it can only gain one CPU,
> > and the cpu-cgroup cpu.shares setting becomes meaningless. Are there any
> > good methods to address that?
>
> select_idle_sibling will iterate over all of the CPUs
> in an LLC domain if there is no idle cpu in the domain.
>
> I suspect it would not take much extra code to track
> down the idlest CPU in the LLC domain, and make sure to
> schedule tasks there, in case no completely idle CPU
> was found.
>
> Are there any major problems with that thinking?
That's a full wake balance... if that were cheap, select_idle_sibling()
would not exist.
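To make the cost difference concrete, here is a toy sketch (not kernel
code; the helper names and load values are made up for illustration).
select_idle_sibling() can stop at the first idle CPU it finds, while
"pick the idlest CPU in the LLC domain" must walk the entire domain on
every wakeup, which is exactly the full-wake-balance cost:

```c
/* Toy model of the tradeoff: an early-exit scan versus a full scan
 * of the LLC domain.  The load array and function names are made up;
 * this is not the actual scheduler code. */
#include <stddef.h>
#include <limits.h>

/* Early-exit scan, in the spirit of select_idle_sibling(): return
 * the first idle CPU, or -1 if none.  Worst case O(n), but it
 * typically terminates as soon as an idle CPU turns up. */
static int find_first_idle(const int *load, size_t ncpus)
{
	size_t i;

	for (i = 0; i < ncpus; i++)
		if (load[i] == 0)
			return (int)i;
	return -1;
}

/* Full scan, in the spirit of Rik's suggestion: always walk the
 * whole domain and return the least-loaded CPU.  It can never exit
 * early, so every wakeup pays the full O(n) walk. */
static int find_idlest(const int *load, size_t ncpus)
{
	int best = -1, best_load = INT_MAX;
	size_t i;

	for (i = 0; i < ncpus; i++) {
		if (load[i] < best_load) {
			best_load = load[i];
			best = (int)i;
		}
	}
	return best;
}
```

Both scans return the same CPU when an idle one exists; the difference
is that the first one stops looking the moment it finds it.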
-Mike