Message-ID: <b74c3ab0-3509-8483-170b-852b079c4c60@bytedance.com>
Date:   Tue, 12 Jul 2022 00:54:46 +0800
From:   Chengming Zhou <zhouchengming@...edance.com>
To:     mingo@...hat.com, peterz@...radead.org, vincent.guittot@...aro.org,
        dietmar.eggemann@....com, rostedt@...dmis.org, bsegall@...gle.com,
        vschneid@...hat.com
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH 8/8] sched/fair: delete superfluous SKIP_AGE_LOAD

On 2022/7/9 23:13, Chengming Zhou wrote:
> All three attach_entity_cfs_rq() cases:
> 
> 1. task migrates to another CPU
> 2. task moves to another cgroup
> 3. task switches to the fair class
> 
> have their sched_avg last_update_time reset to 0 when
> attach_entity_cfs_rq() -> update_load_avg(), so it makes
> no difference whether SKIP_AGE_LOAD is set or not.
> 
> This patch deletes the superfluous SKIP_AGE_LOAD flag, together with
> the now-unused feature ATTACH_AGE_LOAD.
> 
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>
> ---
>  kernel/sched/fair.c     | 18 ++++++------------
>  kernel/sched/features.h |  1 -
>  2 files changed, 6 insertions(+), 13 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b0bde895ba96..b91643a2143e 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -3956,9 +3956,8 @@ static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>   * Optional action to be done while updating the load average
>   */
>  #define UPDATE_TG	0x1
> -#define SKIP_AGE_LOAD	0x2
> -#define DO_ATTACH	0x4
> -#define DO_DETACH	0x8
> +#define DO_ATTACH	0x2
> +#define DO_DETACH	0x4
>  
>  /* Update task and its cfs_rq load average */
>  static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> @@ -3970,7 +3969,7 @@ static inline void update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
>  	 * Track task load average for carrying it to new CPU after migrated, and
>  	 * track group sched_entity load average for task_h_load calc in migration
>  	 */
> -	if (se->avg.last_update_time && !(flags & SKIP_AGE_LOAD))
> +	if (se->avg.last_update_time)
>  		__update_load_avg_se(now, cfs_rq, se);
>  
>  	decayed  = update_cfs_rq_load_avg(now, cfs_rq);
> @@ -4253,7 +4252,6 @@ static inline bool cfs_rq_is_decayed(struct cfs_rq *cfs_rq)
>  }
>  
>  #define UPDATE_TG	0x0
> -#define SKIP_AGE_LOAD	0x0
>  #define DO_ATTACH	0x0
>  #define DO_DETACH	0x0
>  
> @@ -11484,9 +11482,7 @@ static void detach_entity_cfs_rq(struct sched_entity *se)
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  
>  	/* Catch up with the cfs_rq and remove our load when we leave */
> -	update_load_avg(cfs_rq, se, 0);
> -	detach_entity_load_avg(cfs_rq, se);
> -	update_tg_load_avg(cfs_rq);
> +	update_load_avg(cfs_rq, se, UPDATE_TG | DO_DETACH);
>  	propagate_entity_cfs_rq(se);
>  }
>  
> @@ -11494,10 +11490,8 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
>  {
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  
> -	/* Synchronize entity with its cfs_rq */
> -	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
> -	attach_entity_load_avg(cfs_rq, se);
> -	update_tg_load_avg(cfs_rq);
> +	/* Synchronize entity with its cfs_rq and attach our load */
> +	update_load_avg(cfs_rq, se, UPDATE_TG | DO_ATTACH);
>  	propagate_entity_cfs_rq(se);
>  }

Looks like I ordered this change wrongly; it should be placed before patches 5-6.

That's because with this change, update_load_avg() checks last_update_time
before calling attach_entity_load_avg(), instead of attaching unconditionally here.
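
For context, the relevant flow in update_load_avg() after this series looks
roughly like the sketch below (simplified; decay handling and the DO_DETACH
branch are omitted):

static inline void
update_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
{
        u64 now = cfs_rq_clock_pelt(cfs_rq);

        /* Age the entity's load only while it is still attached */
        if (se->avg.last_update_time)
                __update_load_avg_se(now, cfs_rq, se);

        /*
         * last_update_time == 0 marks a detached entity, so DO_ATTACH
         * attaches at most once; a repeated call with DO_ATTACH on an
         * already-attached entity is a no-op.
         */
        if (!se->avg.last_update_time && (flags & DO_ATTACH)) {
                attach_entity_load_avg(cfs_rq, se);
                update_tg_load_avg(cfs_rq);
        }
}

Given that guard, a second DO_ATTACH call is harmless, while an unconditional
attach_entity_load_avg() is not, which is exactly the problem in the case below.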


Problem case: switch to fair class

p->sched_class = fair_class;  --> p.se->avg.last_update_time = 0
if (queued)
  enqueue_task(p);
    ...
      enqueue_entity()
        update_load_avg(UPDATE_TG | DO_ATTACH)
          if (!se->avg.last_update_time && (flags & DO_ATTACH))  --> true
            attach_entity_load_avg()  --> attached, will set last_update_time
check_class_changed()
  switched_from() (!fair)
  switched_to()   (fair)
    switched_to_fair()
      attach_entity_load_avg()  --> unconditional attach again!


If we used an unconditional attach_entity_load_avg() in switched_to_fair(), the
double-attach problem above would occur; see commit 7dc603c9028e for details.

With this patch, switched_to_fair() -> attach_entity_cfs_rq() also uses
update_load_avg(UPDATE_TG | DO_ATTACH), so the double attach cannot happen,
since update_load_avg() checks last_update_time before attaching.
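
Spelled out, the same path with this patch applied looks like this (a sketch
following the trace above, not verbatim kernel code):

p->sched_class = fair_class;  --> p.se->avg.last_update_time = 0
if (queued)
  enqueue_task(p);
    ...
      enqueue_entity()
        update_load_avg(UPDATE_TG | DO_ATTACH)
          last_update_time == 0  --> attach_entity_load_avg(), sets last_update_time
check_class_changed()
  switched_to_fair()
    attach_entity_cfs_rq()
      update_load_avg(UPDATE_TG | DO_ATTACH)
        last_update_time != 0  --> DO_ATTACH branch skipped, no second attach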

Thanks.

>  
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index ee7f23c76bd3..fb92431d496f 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -85,7 +85,6 @@ SCHED_FEAT(RT_PUSH_IPI, true)
>  
>  SCHED_FEAT(RT_RUNTIME_SHARE, false)
>  SCHED_FEAT(LB_MIN, false)
> -SCHED_FEAT(ATTACH_AGE_LOAD, true)
>  
>  SCHED_FEAT(WA_IDLE, true)
>  SCHED_FEAT(WA_WEIGHT, true)
