Message-ID: <20190603093835.GF3436@hirez.programming.kicks-ass.net>
Date: Mon, 3 Jun 2019 11:38:35 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Valentin Schneider <valentin.schneider@....com>
Cc: linux-kernel@...r.kernel.org, mingo@...nel.org,
vincent.guittot@...aro.org, Qian Cai <cai@....pw>
Subject: Re: [PATCH] sched/fair: Clean up definition of NOHZ blocked load
 functions

On Sun, Jun 02, 2019 at 05:41:10PM +0100, Valentin Schneider wrote:
> cfs_rq_has_blocked() and others_have_blocked() are only used within
> update_blocked_averages(). The !CONFIG_FAIR_GROUP_SCHED version of the
> latter calls them within an #ifdef CONFIG_NO_HZ_COMMON block, whereas
> the CONFIG_FAIR_GROUP_SCHED one calls them unconditionally.
>
> As reported by Qian, the above leads to this warning in
> !CONFIG_NO_HZ_COMMON configs:
>
> kernel/sched/fair.c: In function 'update_blocked_averages':
> kernel/sched/fair.c:7750:7: warning: variable 'done' set but not used
> [-Wunused-but-set-variable]
>
> It wouldn't be wrong to keep cfs_rq_has_blocked() and
> others_have_blocked() as they are, but since their only current use is
> to figure out when we can stop calling update_blocked_averages() on
> fully decayed NOHZ idle CPUs, we can give them a new definition for
> !CONFIG_NO_HZ_COMMON.
>
> Change the definition of cfs_rq_has_blocked() and
> others_have_blocked() for !CONFIG_NO_HZ_COMMON so that the
> NOHZ-specific blocks of update_blocked_averages() become no-ops and
> the 'done' variable gets optimised out.
>
> No change in functionality intended.
>
> Reported-by: Qian Cai <cai@....pw>
> Signed-off-by: Valentin Schneider <valentin.schneider@....com>
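
[ For reference, the shape behind the quoted warning, as a minimal
  hypothetical sketch (not the actual kernel code; 'sketch' and its
  parameter are made up for illustration):

	static void sketch(bool has_blocked)
	{
		bool done = true;	/* always written... */

		if (has_blocked)
			done = false;

#ifdef CONFIG_NO_HZ_COMMON
		if (done)		/* ...but only read here */
			;		/* clear rq->has_blocked_load */
#endif
	}

  With CONFIG_NO_HZ_COMMON unset, 'done' is written but never read,
  hence -Wunused-but-set-variable. The quoted patch instead routes
  'done' through a helper that is a no-op stub for
  !CONFIG_NO_HZ_COMMON, so the variable is still consumed and the
  compiler can fold it away. ]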

I'm thinking the below can go on top to further clean up?

--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7722,9 +7722,18 @@ static inline bool others_have_blocked(s
	return false;
}
+
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
+{
+	rq->last_blocked_load_update_tick = jiffies;
+
+	if (!has_blocked)
+		rq->has_blocked_load = 0;
+}

#else
static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
static inline bool others_have_blocked(struct rq *rq) { return false; }
+static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
#endif

#ifdef CONFIG_FAIR_GROUP_SCHED
@@ -7746,18 +7755,6 @@ static inline bool cfs_rq_is_decayed(str
	return true;
}

-#ifdef CONFIG_NO_HZ_COMMON
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
-{
-	rq->last_blocked_load_update_tick = jiffies;
-
-	if (!has_blocked)
-		rq->has_blocked_load = 0;
-}
-#else
-static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
-#endif
-
static void update_blocked_averages(int cpu)
{
	struct rq *rq = cpu_rq(cpu);
@@ -7870,11 +7867,7 @@ static inline void update_blocked_averag
	update_rt_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &rt_sched_class);
	update_dl_rq_load_avg(rq_clock_pelt(rq), rq, curr_class == &dl_sched_class);
	update_irq_load_avg(rq, 0);
-#ifdef CONFIG_NO_HZ_COMMON
-	rq->last_blocked_load_update_tick = jiffies;
-	if (!cfs_rq_has_blocked(cfs_rq) && !others_have_blocked(rq))
-		rq->has_blocked_load = 0;
-#endif
+	update_blocked_load_status(rq, cfs_rq_has_blocked(cfs_rq) || others_have_blocked(rq));
	rq_unlock_irqrestore(rq, &rf);
}
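
With this on top, all the NOHZ blocked-load helpers sit in one
#ifdef CONFIG_NO_HZ_COMMON block and are stubbed out together for
!CONFIG_NO_HZ_COMMON. Condensed from the diff above (a sketch of the
net result, not verbatim kernel source):

	#ifdef CONFIG_NO_HZ_COMMON
	/* ... cfs_rq_has_blocked() and others_have_blocked() as before ... */

	static inline void update_blocked_load_status(struct rq *rq, bool has_blocked)
	{
		rq->last_blocked_load_update_tick = jiffies;

		if (!has_blocked)
			rq->has_blocked_load = 0;
	}
	#else
	static inline bool cfs_rq_has_blocked(struct cfs_rq *cfs_rq) { return false; }
	static inline bool others_have_blocked(struct rq *rq) { return false; }
	static inline void update_blocked_load_status(struct rq *rq, bool has_blocked) {}
	#endif

Both update_blocked_averages() variants can then end with an
unconditional call and no #ifdef in the function body, e.g. the
!CONFIG_FAIR_GROUP_SCHED one:

	update_blocked_load_status(rq, cfs_rq_has_blocked(cfs_rq) ||
				       others_have_blocked(rq));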