Message-ID: <20150813090302.GQ3956@byungchulpark-X58A-UD3R>
Date:	Thu, 13 Aug 2015 18:03:02 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	mingo@...nel.org, peterz@...radead.org
Cc:	linux-kernel@...r.kernel.org, yuyang.du@...el.com
Subject: Re: [PATCH v4] sched: sync with the prev cfs when changing cgroup
 within a cpu

On Thu, Aug 13, 2015 at 09:26:13AM +0900, byungchul.park@....com wrote:
> From: Byungchul Park <byungchul.park@....com>

Please ignore this thread.

I will resend this patch as part of another patch series.

Thanks,
Byungchul

> 
> change from v3 to v4
> * adjust the cfs_rq's load in the "queued" case, too
> 
> change from v2 to v3
> * rebase to tip git
> 
> change from v1 to v2
> * wrap the load tracking code in #ifdef CONFIG_SMP
> * make the commit message more compact and less confusing
> 
> ----->8-----
> From 1d5bcc21cece51eca250986846ed9b01a174bd54 Mon Sep 17 00:00:00 2001
> From: Byungchul Park <byungchul.park@....com>
> Date: Thu, 13 Aug 2015 09:18:07 +0900
> Subject: [PATCH v4] sched: sync with the prev cfs when changing cgroup within
>  a cpu
> 
> The current code gets the cfs_rq's average loads wrong when moving a
> task from one cgroup (i.e. cfs_rq) to another. Testing with "echo pid >
> cgroup" showed that e.g. cfs_rq->avg.load_avg grew larger and larger
> each time the task was moved between cgroups: the old code only added
> the se's load to the next cfs_rq and never removed it from the prev
> one, so every move leaked the task's load into the group it left.
> 
> We have to sync the se's average load with both the *prev* cfs_rq and
> the next cfs_rq when changing its group.
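> 
> As a minimal sketch of that idea (the helper names below are
> hypothetical and only summarize the actual hunk that follows), the
> move boils down to detaching the se's load from the prev cfs_rq and
> then attaching it to the next one:
> 
> 	/* Hypothetical helpers, not in this patch; see the hunk below. */
> 	static void detach_entity_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> 	{
> 		/* Clamp at 0 so rounding never drives the sums negative. */
> 		cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
> 		cfs_rq->avg.load_sum = max_t(s64, cfs_rq->avg.load_sum - se->avg.load_sum, 0);
> 		cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
> 		cfs_rq->avg.util_sum = max_t(s32, cfs_rq->avg.util_sum - se->avg.util_sum, 0);
> 	}
> 
> 	static void attach_entity_load(struct cfs_rq *cfs_rq, struct sched_entity *se)
> 	{
> 		/* Adopt the next cfs_rq's load clock, then add the se's load. */
> 		se->avg.last_update_time = cfs_rq->avg.last_update_time;
> 		cfs_rq->avg.load_avg += se->avg.load_avg;
> 		cfs_rq->avg.load_sum += se->avg.load_sum;
> 		cfs_rq->avg.util_avg += se->avg.util_avg;
> 		cfs_rq->avg.util_sum += se->avg.util_sum;
> 	}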
> 
> Signed-off-by: Byungchul Park <byungchul.park@....com>
> ---
>  kernel/sched/fair.c |   34 ++++++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 10 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 2a33d7b..979ca2c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8017,23 +8017,37 @@ static void task_move_group_fair(struct task_struct *p, int queued)
>  	if (!queued && (!se->sum_exec_runtime || p->state == TASK_WAKING))
>  		queued = 1;
>  
> +	cfs_rq = cfs_rq_of(se);
>  	if (!queued)
> -		se->vruntime -= cfs_rq_of(se)->min_vruntime;
> +		se->vruntime -= cfs_rq->min_vruntime;
> +
> +#ifdef CONFIG_SMP
> +	/* synchronize task with its prev cfs_rq */
> +	if (!queued)
> +		__update_load_avg(cfs_rq->avg.last_update_time, cpu_of(rq_of(cfs_rq)),
> +				&se->avg, se->on_rq * scale_load_down(se->load.weight),
> +				cfs_rq->curr == se, NULL);
> +
> +	/* remove our load when we leave */
> +	cfs_rq->avg.load_avg = max_t(long, cfs_rq->avg.load_avg - se->avg.load_avg, 0);
> +	cfs_rq->avg.load_sum = max_t(s64, cfs_rq->avg.load_sum - se->avg.load_sum, 0);
> +	cfs_rq->avg.util_avg = max_t(long, cfs_rq->avg.util_avg - se->avg.util_avg, 0);
> +	cfs_rq->avg.util_sum = max_t(s32, cfs_rq->avg.util_sum - se->avg.util_sum, 0);
> +#endif
>  	set_task_rq(p, task_cpu(p));
>  	se->depth = se->parent ? se->parent->depth + 1 : 0;
> -	if (!queued) {
> -		cfs_rq = cfs_rq_of(se);
> +	cfs_rq = cfs_rq_of(se);
> +	if (!queued)
>  		se->vruntime += cfs_rq->min_vruntime;
>  
>  #ifdef CONFIG_SMP
> -		/* Virtually synchronize task with its new cfs_rq */
> -		p->se.avg.last_update_time = cfs_rq->avg.last_update_time;
> -		cfs_rq->avg.load_avg += p->se.avg.load_avg;
> -		cfs_rq->avg.load_sum += p->se.avg.load_sum;
> -		cfs_rq->avg.util_avg += p->se.avg.util_avg;
> -		cfs_rq->avg.util_sum += p->se.avg.util_sum;
> +	/* Virtually synchronize task with its new cfs_rq */
> +	p->se.avg.last_update_time = cfs_rq->avg.last_update_time;
> +	cfs_rq->avg.load_avg += p->se.avg.load_avg;
> +	cfs_rq->avg.load_sum += p->se.avg.load_sum;
> +	cfs_rq->avg.util_avg += p->se.avg.util_avg;
> +	cfs_rq->avg.util_sum += p->se.avg.util_sum;
>  #endif
> -	}
>  }
>  
>  void free_fair_sched_group(struct task_group *tg)
> -- 
> 1.7.9.5
> 