Message-ID: <20160408104821.GM3448@twins.programming.kicks-ass.net>
Date: Fri, 8 Apr 2016 12:48:21 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Frederic Weisbecker <fweisbec@...il.com>
Cc: LKML <linux-kernel@...r.kernel.org>,
Byungchul Park <byungchul.park@....com>,
Chris Metcalf <cmetcalf@...hip.com>,
Thomas Gleixner <tglx@...utronix.de>,
Luiz Capitulino <lcapitulino@...hat.com>,
Christoph Lameter <cl@...ux.com>,
"Paul E . McKenney" <paulmck@...ux.vnet.ibm.com>,
Mike Galbraith <efault@....de>, Rik van Riel <riel@...hat.com>,
Ingo Molnar <mingo@...e.hu>
Subject: Re: [PATCH 3/3] sched: Optimize !CONFIG_NO_HZ_COMMON cpu load updates
On Fri, Apr 08, 2016 at 03:07:13AM +0200, Frederic Weisbecker wrote:
> index 4c522a7..59a2821 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7327,8 +7327,9 @@ void __init sched_init(void)
>
> for (j = 0; j < CPU_LOAD_IDX_MAX; j++)
> rq->cpu_load[j] = 0;
> -
> +#ifdef CONFIG_NO_HZ_COMMON
> rq->last_load_update_tick = jiffies;
> +#endif
>
> #ifdef CONFIG_SMP
> rq->sd = NULL;
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1dd864d..4618e5b 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4661,8 +4680,10 @@ static inline void cpu_load_update_nohz(struct rq *this_rq,
>
> static void cpu_load_update_periodic(struct rq *this_rq, unsigned long load)
> {
> +#ifdef CONFIG_NO_HZ_COMMON
> /* See the mess around cpu_load_update_nohz(). */
> this_rq->last_load_update_tick = READ_ONCE(jiffies);
> +#endif
> cpu_load_update(this_rq, load, 1);
> }
>
Here you do the simple #ifdef, while below you make a giant mess instead
of the relatively straightforward:
> @@ -4540,17 +4568,8 @@ static void cpu_load_update(struct rq *this_rq, unsigned long this_load,
>
> /* scale is effectively 1 << i now, and >> i divides by scale */
>
> - old_load = this_rq->cpu_load[i];
#ifdef CONFIG_NO_HZ_COMMON
> - old_load = decay_load_missed(old_load, pending_updates - 1, i);
> - if (tickless_load) {
> - old_load -= decay_load_missed(tickless_load, pending_updates - 1, i);
> - /*
> - * old_load can never be a negative value because a
> - * decayed tickless_load cannot be greater than the
> - * original tickless_load.
> - */
> - old_load += tickless_load;
> - }
#endif
> new_load = this_load;
> /*
> * Round up the averaging division if load is increasing. This