Message-ID: <8025988d-7d1d-0b4a-6eed-8b3698bc9bad@arm.com>
Date:   Thu, 14 Jul 2022 14:30:34 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Chengming Zhou <zhouchengming@...edance.com>, mingo@...hat.com,
        peterz@...radead.org, vincent.guittot@...aro.org,
        rostedt@...dmis.org, bsegall@...gle.com, vschneid@...hat.com
Cc:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2 03/10] sched/fair: maintain task se depth in
 set_task_rq()

On 13/07/2022 06:04, Chengming Zhou wrote:
> Previously we only maintained task se depth in task_move_group_fair();
> if a !fair task changed task groups, its se depth would not be updated,
> so commit eb7a59b2c888 ("sched/fair: Reset se-depth when task switched to FAIR")
> fixed the problem by updating se depth in switched_to_fair() too.

Maybe it's worth mentioning how the se.depth setting from
task_move_group_fair() and switched_to_fair() went into
attach_task_cfs_rq() with commit daa59407b558 ("sched/fair: Unify
switched_{from,to}_fair() and task_move_group_fair()") and further into
attach_entity_cfs_rq() with commit df217913e72e ("sched/fair: Factorize
attach/detach entity").

> This patch moves task se depth maintenance to set_task_rq(), which will be
> called whenever the task's CPU or cgroup changes, so its depth will always be correct.
> 
> This patch is preparation for the next patch.
> 
> Signed-off-by: Chengming Zhou <zhouchengming@...edance.com>

Reviewed-by: Dietmar Eggemann <dietmar.eggemann@....com>
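
For readers following along: se->depth counts the group-entity levels
above an entity, and its main consumer is find_matching_se(), which
walks two entities up to a common hierarchy level before comparing
them. A minimal sketch of that walk (my illustration, not code from
this patch):

	/*
	 * Illustration only (cf. find_matching_se() in
	 * kernel/sched/fair.c): bring two entities to the same
	 * hierarchy level by walking the deeper one up via
	 * se->parent, relying on the se->depth values that
	 * set_task_rq() now keeps correct for every task.
	 */
	static void walk_to_same_depth(struct sched_entity **se,
				       struct sched_entity **pse)
	{
		while ((*se)->depth > (*pse)->depth)
			*se = (*se)->parent;

		while ((*pse)->depth > (*se)->depth)
			*pse = (*pse)->parent;
	}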

> ---
>  kernel/sched/fair.c  | 8 --------
>  kernel/sched/sched.h | 1 +
>  2 files changed, 1 insertion(+), 8 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2a3e12ead144..bf595b622656 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -11539,14 +11539,6 @@ static void attach_entity_cfs_rq(struct sched_entity *se)
>  {
>  	struct cfs_rq *cfs_rq = cfs_rq_of(se);
>  
> -#ifdef CONFIG_FAIR_GROUP_SCHED
> -	/*
> -	 * Since the real-depth could have been changed (only FAIR
> -	 * class maintain depth value), reset depth properly.
> -	 */
> -	se->depth = se->parent ? se->parent->depth + 1 : 0;
> -#endif
> -
>  	/* Synchronize entity with its cfs_rq */
>  	update_load_avg(cfs_rq, se, sched_feat(ATTACH_AGE_LOAD) ? 0 : SKIP_AGE_LOAD);
>  	attach_entity_load_avg(cfs_rq, se);
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index aad7f5ee9666..8cc3eb7b86cd 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1940,6 +1940,7 @@ static inline void set_task_rq(struct task_struct *p, unsigned int cpu)
>  	set_task_rq_fair(&p->se, p->se.cfs_rq, tg->cfs_rq[cpu]);
>  	p->se.cfs_rq = tg->cfs_rq[cpu];
>  	p->se.parent = tg->se[cpu];
> +	p->se.depth = tg->se[cpu] ? tg->se[cpu]->depth + 1 : 0;
>  #endif
>  
>  #ifdef CONFIG_RT_GROUP_SCHED
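
With this hunk, the invariant se->depth == se->parent->depth + 1 (and 0
for entities directly on the root cfs_rq) is re-established on every
set_task_rq() call, i.e. on every CPU migration and cgroup move,
regardless of the task's current sched class. A hypothetical debug
helper (mine, not part of the patch) makes the invariant explicit:

	/*
	 * Hypothetical helper, not in the kernel: verify the depth
	 * invariant that set_task_rq() now maintains unconditionally,
	 * instead of only re-deriving it in attach_entity_cfs_rq().
	 */
	static inline bool se_depth_is_consistent(struct sched_entity *se)
	{
		return se->depth == (se->parent ? se->parent->depth + 1 : 0);
	}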
