Message-ID: <4e15ad55-beeb-e860-0420-8f439d076758@arm.com>
Date:   Mon, 17 Oct 2016 12:49:55 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Vincent Guittot <vincent.guittot@...aro.org>,
        Joseph Salisbury <joseph.salisbury@...onical.com>
Cc:     Ingo Molnar <mingo@...nel.org>,
        Peter Zijlstra <peterz@...radead.org>,
        Linus Torvalds <torvalds@...ux-foundation.org>,
        Thomas Gleixner <tglx@...utronix.de>,
        LKML <linux-kernel@...r.kernel.org>,
        Mike Galbraith <efault@....de>, omer.akram@...onical.com
Subject: Re: [v4.8-rc1 Regression] sched/fair: Apply more PELT fixes

Hi Vincent,

On 17/10/16 10:09, Vincent Guittot wrote:
> On Friday 14 Oct 2016 at 12:04:02 (-0400), Joseph Salisbury wrote:
>> On 10/14/2016 11:18 AM, Vincent Guittot wrote:
>>> On Friday 14 Oct 2016 at 14:10:07 (+0100), Dietmar Eggemann wrote:
>>>> On 14/10/16 09:24, Vincent Guittot wrote:

[...]

> Could you try the patch below on top of the faulty kernel?
> 
> ---
>  kernel/sched/fair.c | 5 +++--
>  1 file changed, 3 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 8b03fb5..8926685 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -2902,7 +2902,8 @@ __update_load_avg(u64 now, int cpu, struct sched_avg *sa,
>   */
>  static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
>  {
> -	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
> +	unsigned long load_avg = READ_ONCE(cfs_rq->avg.load_avg);
> +	long delta = load_avg - cfs_rq->tg_load_avg_contrib;
>  
>  	/*
>  	 * No need to update load_avg for root_task_group as it is not used.
> @@ -2912,7 +2913,7 @@ static inline void update_tg_load_avg(struct cfs_rq *cfs_rq, int force)
>  
>  	if (force || abs(delta) > cfs_rq->tg_load_avg_contrib / 64) {
>  		atomic_long_add(delta, &cfs_rq->tg->load_avg);
> -		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
> +		cfs_rq->tg_load_avg_contrib = load_avg;
>  	}
>  }

I tested the patch on an Ubuntu 16.10 server (on top of the default 4.8.0-22-generic
kernel) on a Lenovo T430, and it didn't help.
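
If I read the patch right, the only functional change is that the delta and the
cached tg_load_avg_contrib are now computed from a single snapshot of
cfs_rq->avg.load_avg, so the contrib can't diverge from what was actually added
to tg->load_avg if load_avg changes underneath us. Illustration only, with
simplified userspace stand-ins (the toy_* names and the READ_ONCE define are
mine, not the kernel code):

#define READ_ONCE(x) (*(const volatile __typeof__(x) *)&(x))

struct toy_cfs_rq {
	unsigned long load_avg;             /* updated concurrently elsewhere */
	unsigned long tg_load_avg_contrib;  /* last value propagated to the tg */
};

static void toy_update_tg_load_avg(struct toy_cfs_rq *cfs_rq, long *tg_load_avg)
{
	unsigned long load_avg = READ_ONCE(cfs_rq->load_avg);  /* single snapshot */
	long delta = load_avg - cfs_rq->tg_load_avg_contrib;

	*tg_load_avg += delta;                   /* atomic_long_add() in the real code */
	cfs_rq->tg_load_avg_contrib = load_avg;  /* exactly what was accounted above */
}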

What seems to cure it is to get rid of this snippet (part of the commit
mentioned earlier in this thread: 3d30544f0212):

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 039de34f1521..16c692049fbf 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -726,7 +726,6 @@ void post_init_entity_util_avg(struct sched_entity *se)
        struct sched_avg *sa = &se->avg;
        long cap = (long)(SCHED_CAPACITY_SCALE - cfs_rq->avg.util_avg) / 2;
        u64 now = cfs_rq_clock_task(cfs_rq);
-       int tg_update;
 
        if (cap > 0) {
                if (cfs_rq->avg.util_avg != 0) {
@@ -759,10 +758,8 @@ void post_init_entity_util_avg(struct sched_entity *se)
                }
        }
 
-       tg_update = update_cfs_rq_load_avg(now, cfs_rq, false);
+       update_cfs_rq_load_avg(now, cfs_rq, false);
        attach_entity_load_avg(cfs_rq, se);
-       if (tg_update)
-               update_tg_load_avg(cfs_rq, false);
 }
 
 #else /* !CONFIG_SMP */

BTW, I guess .tg_load_avg can initially reach ~300000-400000 on such a system
because systemd creates all ~100 services (and therefore the corresponding
2nd-level tg's) at once. In my previous example there was 500ms between the
creation of two tg's, so a lot of decaying went on in between.
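
Purely a back-of-envelope on my part (assumption: each freshly attached entity
contributes close to NICE_0 load, i.e. 1024, and each of those ~100 tg's picks
up a few such contributions before any decay): 100 tg's * 3-4 contributions *
1024 ≈ 300000-400000, which would fit the numbers above.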
