Message-ID: <1395896233.5512.45.camel@marge.simpson.net>
Date: Thu, 27 Mar 2014 05:57:13 +0100
From: Mike Galbraith <umgwanakikbuti@...il.com>
To: Yuyang du <yuyang.du@...el.com>
Cc: peterz@...radead.org, mingo@...hat.com,
linux-kernel@...r.kernel.org, linux-pm@...r.kernel.org,
morten.rasmussen@....com, arjan.van.de.ven@...el.com,
len.brown@...el.com, rafael.j.wysocki@...el.com, alan.cox@...el.com
Subject: Re: [RFC II] Splitting scheduler into two halves
On Thu, 2014-03-27 at 02:37 +0800, Yuyang du wrote:
> Hi all,
>
> This is a continuation of the first RFC about splitting the scheduler. It is
> still a work in progress, and a call for feedback.
>
> The question addressed here is how load balancing should be changed. I think
> the question then becomes how to *reuse* as much common code as possible while
> still serving various objectives.
>
> So these are the basic semantics needed in the current load balancer:
I'll probably regret it, but I'm gonna speak my mind. I think this two-halves
concept is fundamentally broken.
> 1. [ At balance point ] on this_cpu push task on that_cpu to [ third_cpu ]
Load balancing is a necessary part of the fastpath as well as the slow path;
you can't just define a "balance point" and have that mean a point at which
we can separate core functionality from peripheral. For example, the rt
class does push/pull at schedule time, and the fair class runs
select_idle_sibling() at wakeup, both in the fastpath, to minimize latency.
It is all load balancing, it is all push/pull; the fastpath does exactly the
same things as the slow path, for exactly the same reason, only the resource
investment varies.
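
To illustrate the point, here is a toy, self-contained C sketch (not kernel
code; all names such as toy_rq, toy_select_idle_sibling() and
toy_periodic_balance() are made up): the wakeup fastpath and the periodic
slowpath make the same "find a less loaded CPU and put work there" decision,
they just invest different amounts of search effort.

/*
 * Toy illustration, not kernel code: the same balancing decision done
 * cheaply at wakeup (fastpath) and exhaustively by a periodic balancer
 * (slowpath). All names here are invented for the example.
 */
#include <stdio.h>

#define NR_CPUS		8
#define LLC_SPAN	4	/* pretend CPUs 0-3 share a cache, 4-7 another */

struct toy_rq {
	int nr_running;
};

static struct toy_rq rq[NR_CPUS];

/* Fastpath: at wakeup, scan only the waker's cache domain for an idle CPU. */
static int toy_select_idle_sibling(int waker_cpu)
{
	int base = (waker_cpu / LLC_SPAN) * LLC_SPAN;

	for (int cpu = base; cpu < base + LLC_SPAN; cpu++)
		if (rq[cpu].nr_running == 0)
			return cpu;

	return waker_cpu;	/* nothing idle nearby, stay put */
}

/* Slowpath: periodic balance walks every CPU and evens out queue lengths. */
static void toy_periodic_balance(void)
{
	int busiest = 0, idlest = 0;

	for (int cpu = 1; cpu < NR_CPUS; cpu++) {
		if (rq[cpu].nr_running > rq[busiest].nr_running)
			busiest = cpu;
		if (rq[cpu].nr_running < rq[idlest].nr_running)
			idlest = cpu;
	}

	/* Same decision as the fastpath, just a wider, costlier search. */
	if (rq[busiest].nr_running - rq[idlest].nr_running > 1) {
		rq[busiest].nr_running--;
		rq[idlest].nr_running++;
		printf("balance: moved a task from CPU%d to CPU%d\n",
		       busiest, idlest);
	}
}

int main(void)
{
	rq[0].nr_running = 3;
	rq[5].nr_running = 1;

	printf("wakeup on CPU0 placed on CPU%d\n", toy_select_idle_sibling(0));
	toy_periodic_balance();
	return 0;
}

The only difference between the two functions is the span and cost of the
search, which is the point: you can't carve the cheap one out as "core" and
the expensive one out as "peripheral".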
I don't think you can separate the scheduler into two halves like this;
load balancing is an integral part and a fundamental consequence of being
a multi-queue scheduler. Scheduling and balancing are not two halves that
make a whole and can thus be separated, they are one.
-Mike