Message-ID: <20160309120403.GK6344@twins.programming.kicks-ass.net>
Date: Wed, 9 Mar 2016 13:04:03 +0100
From: Peter Zijlstra <peterz@...radead.org>
To: Pavan Kondeti <pkondeti@...eaurora.org>
Cc: linux-kernel@...r.kernel.org,
Mike Galbraith <umgwanakikbuti@...il.com>,
Ingo Molnar <mingo@...nel.org>, Paul Turner <pjt@...gle.com>,
Ben Segall <bsegall@...gle.com>,
Matt Fleming <matt@...eblueprint.co.uk>,
Morten Rasmussen <morten.rasmussen@....com>,
byungchul.park@....com
Subject: Re: Migrated CFS task getting an unfair advantage
On Wed, Mar 09, 2016 at 02:52:57PM +0530, Pavan Kondeti wrote:
> When a CFS task is enqueued during migration (load balance or change in
> affinity), its vruntime is normalized before updating the current and
> cfs_rq->min_vruntime.
static void
enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
        /*
         * Update the normalized vruntime before updating min_vruntime
         * through calling update_curr().
         */
        if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
                se->vruntime += cfs_rq->min_vruntime;

        update_curr(cfs_rq);
This, right? Some idiot wrote a comment but forgot to explain why.
> If the current entity is a low-priority task or belongs to a cgroup
> with a lower cpu.shares value, and it is the only entity queued, there
> is a possibility of a big update to cfs_rq->min_vruntime.
> As the migrated task is normalized before this update, it gets an
> unfair advantage over tasks queued after this point. If the migrated
> task is a CPU hog, the other CFS tasks queued on this CPU get starved.
Because it takes a whole while for the newly placed task to gain on the
previous task, right?
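To put rough, made-up numbers on that (purely a toy model, not kernel
code; every value below is invented):

/* toy_vruntime.c -- invented numbers, only to illustrate the ordering issue */
#include <stdio.h>

int main(void)
{
        /* Destination cfs_rq before the enqueue (value made up). */
        unsigned long long min_vruntime = 1000000000ULL;    /* 1000 ms */

        /* The migrated task arrives with a normalized (relative) vruntime. */
        unsigned long long migrated = 50000000ULL;           /* 50 ms */

        /* enqueue_entity() today: renormalize against the stale min_vruntime. */
        migrated += min_vruntime;                             /* 1050 ms */

        /*
         * update_curr(): the sole queued entity has low weight, so its
         * weighted delta_exec is large and min_vruntime jumps after it.
         */
        min_vruntime += 300000000ULL;                         /* -> 1300 ms */

        /* A task waking up later is placed around the new min_vruntime. */
        unsigned long long later_wakeup = min_vruntime;

        printf("migrated task starts %llu ms left of later wakeups\n",
               (later_wakeup - migrated) / 1000000ULL);
        return 0;
}

Until the migrated task has burned through that (here ~250 ms) vruntime
lead, pick_next_entity() keeps preferring it over everything queued later.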
> If we add the migrated task to destination CPU cfs_rq's rb tree before
> updating the current in enqueue_entity(), the cfs_rq->min_vruntime
> does not go beyond the newly migrated task. Is this an acceptable
> solution?
Hurm.. so I'm not sure how that would solve anything. The existing task
would still be shot far into the future.
What you want is to normalize after update_curr()... but we cannot do
that in the case cfs_rq->curr == se (which I suppose is what that
comment is on about).
Does something like the below work?
---
kernel/sched/fair.c | 20 ++++++++++++++------
1 file changed, 14 insertions(+), 6 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 33130529e9b5..3c114d971d84 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3157,17 +3157,25 @@ static inline void check_schedstat_required(void)
 static void
 enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 {
+        bool renorm = !(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING);
+        bool curr = cfs_rq->curr == se;
+
         /*
-         * Update the normalized vruntime before updating min_vruntime
-         * through calling update_curr().
+         * If we're the current task, we must renormalise before calling
+         * update_curr().
         */
-        if (!(flags & ENQUEUE_WAKEUP) || (flags & ENQUEUE_WAKING))
+        if (renorm && curr)
                se->vruntime += cfs_rq->min_vruntime;
 
+        update_curr(cfs_rq);
+
         /*
-         * Update run-time statistics of the 'current'.
+         * Otherwise, renormalise after, such that we're placed at the current
+         * moment in time, instead of some random moment in the past.
         */
-        update_curr(cfs_rq);
+        if (renorm && !curr)
+                se->vruntime += cfs_rq->min_vruntime;
+
         enqueue_entity_load_avg(cfs_rq, se);
         account_entity_enqueue(cfs_rq, se);
         update_cfs_shares(cfs_rq);
@@ -3183,7 +3191,7 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
                update_stats_enqueue(cfs_rq, se);
                check_spread(cfs_rq, se);
         }
-        if (se != cfs_rq->curr)
+        if (!curr)
                __enqueue_entity(cfs_rq, se);
         se->on_rq = 1;
 
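FWIW, in the toy numbers from above, the migrated (non-curr) task would
now pick up the post-update_curr() min_vruntime, landing around 1350 ms
instead of 1050 ms, i.e. roughly level with later wakeups rather than
~250 ms ahead of them.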