Message-ID: <20160620092339.GA4526@vingu-laptop>
Date: Mon, 20 Jun 2016 11:23:39 +0200
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Yuyang Du <yuyang.du@...el.com>, Ingo Molnar <mingo@...nel.org>,
linux-kernel <linux-kernel@...r.kernel.org>,
Mike Galbraith <umgwanakikbuti@...il.com>,
Benjamin Segall <bsegall@...gle.com>,
Paul Turner <pjt@...gle.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Matt Fleming <matt@...eblueprint.co.uk>
Subject: Re: [PATCH 4/4] sched,fair: Fix PELT integrity for new tasks
On Friday 17 Jun 2016 at 18:18:31 (+0200), Peter Zijlstra wrote:
> On Fri, Jun 17, 2016 at 06:02:39PM +0200, Peter Zijlstra wrote:
> > So yes, ho-humm, how to go about doing that bestest. Lemme have a play.
>
> This is what I came up with, not entirely pretty, but I suppose it'll
> have to do.
>
> ---
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -724,6 +724,7 @@ void post_init_entity_util_avg(struct sc
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  	struct sched_avg *sa = &se->avg;
>  	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
> +	u64 now = cfs_rq_clock_task(cfs_rq);
>  
>  	if (cap > 0) {
>  		if (cfs_rq->avg.util_avg != 0) {
> @@ -738,7 +739,20 @@ void post_init_entity_util_avg(struct sc
>  		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
>  	}
>  
> -	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
> +	if (entity_is_task(se)) {
Why only for tasks?
> +		struct task_struct *p = task_of(se);
> +		if (p->sched_class != &fair_sched_class) {
> +			/*
> +			 * For !fair tasks do attach_entity_load_avg()
> +			 * followed by detach_entity_load_avg() as per
> +			 * switched_from_fair().
> +			 */
> +			se->avg.last_update_time = now;
> +			return;
> +		}
> +	}
> +
> +	update_cfs_rq_load_avg(now, cfs_rq, false);
>  	attach_entity_load_avg(cfs_rq, se);
Don't we have to do a complete attach with attach_task_cfs_rq() instead of just attaching the load_avg, so that the depth is set as well?
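For reference, attach_task_cfs_rq() at this point looks roughly like below (a sketch of this era's kernel/sched/fair.c, quoted from memory rather than verbatim); the se->depth fixup is exactly the part a bare attach_entity_load_avg() skips:

static void attach_task_cfs_rq(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

#ifdef CONFIG_FAIR_GROUP_SCHED
	/*
	 * Since the real depth could have been changed (only the FAIR
	 * class maintains the depth value), reset depth properly.
	 */
	se->depth = se->parent ? se->parent->depth + 1 : 0;
#endif

	/* Synchronize task with its cfs_rq */
	attach_entity_load_avg(cfs_rq, se);

	if (!vruntime_normalized(p))
		se->vruntime += cfs_rq->min_vruntime;
}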
What about something like the patch below?
---
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -723,6 +723,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
 	struct sched_avg *sa = &se->avg;
 	long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
+	struct task_struct *p = task_of(se);
+	u64 now = cfs_rq_clock_task(cfs_rq);
 
 	if (cap > 0) {
 		if (cfs_rq->avg.util_avg != 0) {
@@ -737,8 +739,18 @@ void post_init_entity_util_avg(struct sched_entity *se)
 		sa->util_sum = sa->util_avg * LOAD_AVG_MAX;
 	}
 
-	update_cfs_rq_load_avg(cfs_rq_clock_task(cfs_rq), cfs_rq, false);
-	attach_entity_load_avg(cfs_rq, se);
+	if (p->sched_class == &fair_sched_class) {
+		/* fair entity must be attached to cfs_rq */
+		attach_task_cfs_rq(p);
+	} else {
+		/*
+		 * For !fair tasks do attach_entity_load_avg()
+		 * followed by detach_entity_load_avg() as per
+		 * switched_from_fair().
+		 */
+		se->avg.last_update_time = now;
+	}
+
 }
 
 static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
--
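For completeness, the switched_from_fair() that the comment refers to boils down to detach_entity_load_avg(); an attach immediately followed by a detach is load-neutral for the cfs_rq and only leaves last_update_time stamped, which is why writing se->avg.last_update_time = now directly gives the equivalent end state. Roughly (again a from-memory sketch of this era's kernel/sched/fair.c, not a verbatim quote):

static void detach_task_cfs_rq(struct task_struct *p)
{
	struct sched_entity *se = &p->se;
	struct cfs_rq *cfs_rq = cfs_rq_of(se);

	if (!vruntime_normalized(p)) {
		/*
		 * Fix up our vruntime so that the current sleep doesn't
		 * cause 'unlimited' sleep bonus.
		 */
		place_entity(cfs_rq, se, 0);
		se->vruntime -= cfs_rq->min_vruntime;
	}

	/* Catch up with the cfs_rq and remove our load when we leave */
	detach_entity_load_avg(cfs_rq, se);
}

static void switched_from_fair(struct rq *rq, struct task_struct *p)
{
	detach_task_cfs_rq(p);
}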