Message-ID: <518B5EC9.1030605@intel.com>
Date:	Thu, 09 May 2013 16:31:05 +0800
From:	Alex Shi <alex.shi@...el.com>
To:	Paul Turner <pjt@...gle.com>
CC:	Ingo Molnar <mingo@...hat.com>,
	Peter Zijlstra <peterz@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Borislav Petkov <bp@...en8.de>,
	Namhyung Kim <namhyung@...nel.org>,
	Mike Galbraith <efault@....de>,
	Morten Rasmussen <morten.rasmussen@....com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Preeti U Murthy <preeti@...ux.vnet.ibm.com>,
	Viresh Kumar <viresh.kumar@...aro.org>,
	LKML <linux-kernel@...r.kernel.org>,
	Mel Gorman <mgorman@...e.de>, Rik van Riel <riel@...hat.com>,
	Michael Wang <wangyun@...ux.vnet.ibm.com>
Subject: Re: [PATCH v5 3/7] sched: set initial value of runnable avg for new
 forked task


> 
> Here is the patch according to Paul's suggestions.
> Only the reference to __update_task_entity_contrib in sched.h looks ugly.
> Comments are appreciated!

Paul,

With sched_slice, we need to set the runnable avg sum/period after the
new task has been assigned to a specific CPU. So setting them in
__sched_fork() is meaningless, and there is then no reason to call
__update_task_entity_contrib(&p->se) either. I am going to pick up the
old patch and drop this one; that also avoids having to declare
__update_task_entity_contrib in sched.h.
What's your comment on this?
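
For reference, a rough sketch of how that initialization could look once
the task has a CPU, e.g. in fair.c (the helper name
init_task_runnable_average and the sched_slice()-based seeding below are
assumptions for illustration, not the posted patch):

/*
 * Sketch only: seed the runnable average after the child has been
 * assigned a CPU (e.g. called from wake_up_new_task() after task
 * placement), instead of from __sched_fork().
 */
void init_task_runnable_average(struct task_struct *p)
{
	u32 slice;

	p->se.avg.decay_count = 0;
	/* assume the new task was runnable for one full slice */
	slice = sched_slice(task_cfs_rq(p), &p->se) >> 10;
	p->se.avg.runnable_avg_sum = slice;
	p->se.avg.runnable_avg_period = slice;
	__update_task_entity_contrib(&p->se);
}

Being in fair.c, this would also avoid needing the extern declaration of
__update_task_entity_contrib in sched.h.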

Regards!
> 
> ---
> From 647404447c996507b6a94110ed13fd122e4ee154 Mon Sep 17 00:00:00 2001
> From: Alex Shi <alex.shi@...el.com>
> Date: Mon, 3 Dec 2012 17:30:39 +0800
> Subject: [PATCH 3/7] sched: set initial value of runnable avg for new forked
>  task
> 
> We need to initialize se.avg.{decay_count, load_avg_contrib} for a
> newly forked task.
> Otherwise, random values in these variables cause a mess when the new
> task is enqueued:
>     enqueue_task_fair
>         enqueue_entity
>             enqueue_entity_load_avg
> 
> and make fork balancing imbalanced because of the incorrect
> load_avg_contrib.
> 
> Set avg.decay_count = 0 and give runnable_avg_sum/period initial values
> to resolve such issues.
> 
> Thanks for Paul's suggestions.
> 
> Signed-off-by: Alex Shi <alex.shi@...el.com>
> ---
>  kernel/sched/core.c  | 8 +++++++-
>  kernel/sched/fair.c  | 4 ++++
>  kernel/sched/sched.h | 1 +
>  3 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index c8db984..4e78de1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -1566,6 +1566,11 @@ static void __sched_fork(struct task_struct *p)
>  #ifdef CONFIG_SMP
>  	p->se.avg.runnable_avg_period = 0;
>  	p->se.avg.runnable_avg_sum = 0;
> +	p->se.avg.decay_count = 0;
> +	/* A newly forked task is assumed to have full utilization */
> +	p->se.avg.runnable_avg_period = 1024;
> +	p->se.avg.runnable_avg_sum = 1024;
> +	__update_task_entity_contrib(&p->se);
>  #endif
>  #ifdef CONFIG_SCHEDSTATS
>  	memset(&p->se.statistics, 0, sizeof(p->se.statistics));
> @@ -1619,7 +1624,6 @@ void sched_fork(struct task_struct *p)
>  	unsigned long flags;
>  	int cpu = get_cpu();
>  
> -	__sched_fork(p);
>  	/*
>  	 * We mark the process as running here. This guarantees that
>  	 * nobody will actually run it, and a signal or other external
> @@ -1653,6 +1657,8 @@ void sched_fork(struct task_struct *p)
>  		p->sched_reset_on_fork = 0;
>  	}
>  
> +	__sched_fork(p);
> +
>  	if (!rt_prio(p->prio))
>  		p->sched_class = &fair_sched_class;
>  
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9c2f726..2881d42 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -1508,6 +1508,10 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
>  	 * We track migrations using entity decay_count <= 0, on a wake-up
>  	 * migration we use a negative decay count to track the remote decays
>  	 * accumulated while sleeping.
> +	 *
> +	 * When enqueueing a newly forked task, se->avg.decay_count == 0, so
> +	 * we bypass update_entity_load_avg() and use the initial value of
> +	 * avg.load_avg_contrib: se->load.weight.
>  	 */
>  	if (unlikely(se->avg.decay_count <= 0)) {
>  		se->avg.last_runnable_update = rq_of(cfs_rq)->clock_task;
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index c6634f1..ec4cb9b 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -876,6 +876,7 @@ extern const struct sched_class idle_sched_class;
>  extern void trigger_load_balance(struct rq *rq, int cpu);
>  extern void idle_balance(int this_cpu, struct rq *this_rq);
>  
> +extern inline void __update_task_entity_contrib(struct sched_entity *se);
>  #else	/* CONFIG_SMP */
>  
>  static inline void idle_balance(int cpu, struct rq *rq)
> 


-- 
Thanks
    Alex