Message-ID: <20160920115458.GX5016@twins.programming.kicks-ass.net>
Date: Tue, 20 Sep 2016 13:54:58 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: Ingo Molnar <mingo@...nel.org>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Yuyang Du <yuyang.du@...el.com>,
	Morten Rasmussen <Morten.Rasmussen@....com>,
	Linaro Kernel Mailman List <linaro-kernel@...ts.linaro.org>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Paul Turner <pjt@...gle.com>,
	Benjamin Segall <bsegall@...gle.com>
Subject: Re: [PATCH 7/7 v3] sched: fix wrong utilization accounting when
 switching to fair class

On Fri, Sep 16, 2016 at 04:23:16PM +0200, Vincent Guittot wrote:
> On 16 September 2016 at 14:16, Peter Zijlstra <peterz@...radead.org> wrote:
> >> > Also, the normalize comment in dequeue_entity() worries me, 'someone'
> >> > didn't update that when he moved update_min_vruntime() around.
> >
> > I now worry more, so we do:
> >
> >   dequeue_task := dequeue_task_fair (p == current)
> >     dequeue_entity
> >       update_curr()
> >         update_min_vruntime()
> >       vruntime -= min_vruntime
> >       update_min_vruntime()
> >         // use cfs_rq->curr, which we just normalized !
>
> yes but does it really change the cfs_rq->min_vruntime in this case ?
So let me see; it does:

	vruntime = cfs_rq->min_vruntime;

	if (curr) // true
		vruntime = curr->vruntime; // == vruntime - min_vruntime

	if (leftmost) // possible
		if (curr) // true
			vruntime = min_vruntime(vruntime, se->vruntime);
			// tests: (se->vruntime - (curr->vruntime - min_vruntime)) < 0 // false

	min_vruntime = max_vruntime(min_vruntime, vruntime);
	// tests: ((curr->vruntime - min_vruntime) - min_vruntime) > 0
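For reference, those expanded conditions are the signed-delta tests inside
the wrap-safe comparison helpers; a sketch matching what fair.c has:

	static inline u64 max_vruntime(u64 max_vruntime, u64 vruntime)
	{
		s64 delta = (s64)(vruntime - max_vruntime);

		if (delta > 0)
			max_vruntime = vruntime;

		return max_vruntime;
	}

	static inline u64 min_vruntime(u64 min_vruntime, u64 vruntime)
	{
		s64 delta = (s64)(vruntime - min_vruntime);

		if (delta < 0)
			min_vruntime = vruntime;

		return min_vruntime;
	}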
The problem is that double subtraction of min_vruntime can wrap.
The thing is, min_vruntime is the 0-point in our modular space; it
normalizes vruntime (ideally min_vruntime would be our 0-lag point,
making vruntime - min_vruntime the lag).
The moment min_vruntime grows past S64_MAX/2, -2*min_vruntime wraps into
positive space again, the test above becomes true, and we'll select the
normalized @curr vruntime as the new min_vruntime; weird stuff will
happen.
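To make the wrap concrete, here is a stand-alone user-space sketch; the
values are hypothetical, picked only so the double subtraction lands in
the wrapped range:

	#include <stdio.h>
	#include <stdint.h>

	int main(void)
	{
		/* hypothetical: a min_vruntime far into the u64 space */
		uint64_t min_vruntime = ((uint64_t)1 << 63) + 1000;
		uint64_t curr_vruntime = min_vruntime + 500;

		/* dequeue_task_fair() normalizes: vruntime -= min_vruntime */
		uint64_t normalized = curr_vruntime - min_vruntime; /* 500 */

		/* the max_vruntime() test then computes, in effect,
		 * (curr_vruntime - min_vruntime) - min_vruntime: */
		int64_t delta = (int64_t)(normalized - min_vruntime);

		printf("delta = %lld (%s)\n", (long long)delta,
		       delta > 0 ? "wrapped positive, bogus new min_vruntime"
				 : "negative, harmless");
		return 0;
	}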
Also, even if things magically worked out, it's still very icky to mix
the normalized vruntime into these comparisons.
> >   put_prev_task := put_prev_task_fair
> >     put_prev_entity
> >       cfs_rq->curr = NULL;
> >
> >
> > Now the point of the latter update_min_vruntime() is to advance
> > min_vruntime when the task we removed was the one holding it back.
> >
> > However, it means that if we do dequeue+enqueue, we're further in the
> > future (ie. we get penalized).
> >
> > So I'm inclined to simply remove the (2nd) update_min_vruntime() call.
> > But as said above, my brain isn't co-operating much today.
OK, so I'm not sure we can actually remove it; we do want it to move
min_vruntime forward (sometimes). We just don't want it to do so on
DEQUEUE_SAVE -- we want to get back where we left off -- nor do we want
to muck about with normalized values.
Another fun corner case is DEQUEUE_SLEEP; in that case we do not
normalize, but we still want to advance min_vruntime if this task was
the one holding it back.
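Spelled out against the dequeue flags (assuming the usual values from
kernel/sched/sched.h: DEQUEUE_SLEEP 0x01, DEQUEUE_SAVE 0x02,
DEQUEUE_MOVE 0x04), the condition in the patch below behaves as:

	/* advance iff (flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE */
	0 or DEQUEUE_SLEEP            -> advance (task is really leaving)
	DEQUEUE_SAVE                  -> keep    (will be put straight back)
	DEQUEUE_SAVE | DEQUEUE_MOVE   -> advance (moving away from this cfs_rq)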
I ended up with the below, but I'm not sure I like it much. Let me prod
a wee bit more to see if there's not something else we can do.
Google has this patch-set replacing min_vruntime with an actual global
0-lag, which greatly simplifies things. If only they'd post it sometime
:/ /me prods pjt and ben with a sharp stick :-)
---
 kernel/sched/fair.c | 22 ++++++++++++++++++----
 1 file changed, 18 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 986c10c25176..77566a340cbf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -462,17 +462,23 @@ static inline int entity_before(struct sched_entity *a,
 
 static void update_min_vruntime(struct cfs_rq *cfs_rq)
 {
+	struct sched_entity *curr = cfs_rq->curr;
+
 	u64 vruntime = cfs_rq->min_vruntime;
 
-	if (cfs_rq->curr)
-		vruntime = cfs_rq->curr->vruntime;
+	if (curr) {
+		if (curr->on_rq)
+			vruntime = curr->vruntime;
+		else
+			curr = NULL;
+	}
 
 	if (cfs_rq->rb_leftmost) {
 		struct sched_entity *se = rb_entry(cfs_rq->rb_leftmost,
 						   struct sched_entity,
 						   run_node);
 
-		if (!cfs_rq->curr)
+		if (!curr)
 			vruntime = se->vruntime;
 		else
 			vruntime = min_vruntime(vruntime, se->vruntime);
@@ -3483,8 +3489,16 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	/* return excess runtime on last dequeue */
 	return_cfs_rq_runtime(cfs_rq);
 
-	update_min_vruntime(cfs_rq);
 	update_cfs_shares(cfs_rq);
+
+	/*
+	 * Now advance min_vruntime if @se was the entity holding it back,
+	 * except when: DEQUEUE_SAVE && !DEQUEUE_MOVE, in this case we'll be
+	 * put back on, and if we advance min_vruntime, we'll be placed back
+	 * further than we started -- ie. we'll be penalized.
+	 */
+	if ((flags & (DEQUEUE_SAVE | DEQUEUE_MOVE)) != DEQUEUE_SAVE)
+		update_min_vruntime(cfs_rq);
 }
 
 /*