Message-ID: <57506C8B.4050407@arm.com>
Date: Thu, 2 Jun 2016 18:27:39 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Juri Lelli <juri.lelli@....com>
Cc: Peter Zijlstra <peterz@...radead.org>,
linux-kernel@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
Ben Segall <bsegall@...gle.com>,
Morten Rasmussen <morten.rasmussen@....com>,
Yuyang Du <yuyang.du@...el.com>
Subject: Re: [RFC PATCH 3/3] sched/fair: Change @running of
__update_load_avg() to @update_util
On 02/06/16 10:25, Juri Lelli wrote:
[...]
>> @@ -2757,7 +2754,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>> weight * scaled_delta_w;
>> }
>> }
>> - if (update_util && running)
>> + if (update_util == 0x3)
>
> How about a define for these masks?
Something like this?
+#define UTIL_RUNNING 1
+#define UTIL_UPDATE 2
+
/*
* We can represent the historical contribution to runnable average as the
* coefficients of a geometric series. To do this we sub-divide our runnable
@@ -2724,7 +2727,7 @@ static u32 __compute_runnable_contrib(u64 n)
*/
static __always_inline int
__update_load_avg(u64 now, int cpu, struct sched_avg *sa,
- unsigned long weight, int update_util, struct cfs_rq *cfs_rq)
+ unsigned long weight, int util_flags, struct cfs_rq *cfs_rq)
{
u64 delta, scaled_delta, periods;
u32 contrib;
@@ -2775,7 +2778,7 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
weight * scaled_delta_w;
}
}
- if (update_util == 0x3)
+ if (util_flags == (UTIL_UPDATE | UTIL_RUNNING))
sa->util_sum += scaled_delta_w * scale_cpu;
...