Message-ID: <51244E9D.6050709@linux.vnet.ibm.com>
Date:	Wed, 20 Feb 2013 09:48:37 +0530
From:	Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To:	Paul Turner <pjt@...gle.com>
CC:	Alex Shi <alex.shi@...el.com>,
	Peter Zijlstra <peterz@...radead.org>,
	torvalds@...ux-foundation.org, mingo@...hat.com,
	tglx@...utronix.de, akpm@...ux-foundation.org,
	arjan@...ux.intel.com, bp@...en8.de, namhyung@...nel.org,
	efault@....de, vincent.guittot@...aro.org,
	gregkh@...uxfoundation.org, viresh.kumar@...aro.org,
	linux-kernel@...r.kernel.org
Subject: Re: [patch v4 07/18] sched: set initial load avg of new forked task

Hi everyone,

On 02/19/2013 05:04 PM, Paul Turner wrote:
> On Fri, Feb 15, 2013 at 2:07 AM, Alex Shi <alex.shi@...el.com> wrote:
>>
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index 1dff78a..9d1c193 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -1557,8 +1557,8 @@ static void __sched_fork(struct task_struct *p)
>>>   * load-balance).
>>>   */
>>>  #if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
>>> -       p->se.avg.runnable_avg_period = 0;
>>> -       p->se.avg.runnable_avg_sum = 0;
>>> +       p->se.avg.runnable_avg_period = 1024;
>>> +       p->se.avg.runnable_avg_sum = 1024;
>>
>> It can't work.
>> avg.decay_count needs to be set to 0 before enqueue_entity_load_avg(), and
>> then update_entity_load_avg() won't be called, so the seeded
>> runnable_avg_period/sum are never used.
> 
> Well, we _could_ also use a negative decay_count here and treat it like
> a migration; but the larger problem is the visibility of p->on_rq,
> which gates whether we account the time as runnable and is set only
> after activate_task(), so that's out.
> 
>>
>> Even if we get a chance to call __update_entity_runnable_avg(),
>> avg.last_runnable_update needs to be set before that, usually to
>> 'now'. That makes __update_entity_runnable_avg() return 0, so
>> update_entity_load_avg() still cannot reach
>> __update_entity_load_avg_contrib().
>>
>> And if we embed the new-task load initialization into many functions,
>> it becomes too hard for future readers to follow.
> 
> This is my concern about making this a special case with the
> introduction of the ENQUEUE_NEWTASK flag; enqueue jumps through enough
> hoops as it is.
> 
> I still don't see why we can't resolve this at init time in
> __sched_fork(); your patch above just moves an explicit initialization
> of load_avg_contrib into the enqueue path.  Wouldn't adding a call to
> __update_task_entity_contrib() to the previous alternate suggestion
> similarly resolve this?

We could do this (add a call to __update_task_entity_contrib()), but
cfs_rq->runnable_load_avg gets updated only if the task is on the runqueue,
and in the forked task's case the on_rq flag is not yet set. Something like
the below:

---
 kernel/sched/fair.c |   18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8691b0d..841e156 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1451,14 +1451,20 @@ static inline void update_entity_load_avg(struct sched_entity *se,
 	else
 		now = cfs_rq_clock_task(group_cfs_rq(se));
 
-	if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq))
-		return;
-
+	if (!__update_entity_runnable_avg(now, &se->avg, se->on_rq)) {
+		if (!(flags & ENQUEUE_NEWTASK))
+			return;
+	}
 	contrib_delta = __update_entity_load_avg_contrib(se);
 
 	if (!update_cfs_rq)
 		return;
 
+	/*
+	 * cfs_rq->runnable_load_avg does not get updated for a forked
+	 * task because se->on_rq = 0, although we updated the task's
+	 * load_avg_contrib above in __update_entity_load_avg_contrib().
+	 */
 	if (se->on_rq)
 		cfs_rq->runnable_load_avg += contrib_delta;
 	else
@@ -1538,12 +1544,6 @@ static inline void enqueue_entity_load_avg(struct cfs_rq *cfs_rq,
 		subtract_blocked_load_contrib(cfs_rq, se->avg.load_avg_contrib);
 		update_entity_load_avg(se, 0);
 	}
-	/*
-	 * set the initial load avg of new task same as its load
-	 * in order to avoid brust fork make few cpu too heavier
-	 */
-	if (flags & ENQUEUE_NEWTASK)
-		se->avg.load_avg_contrib = se->load.weight;
 
 	cfs_rq->runnable_load_avg += se->avg.load_avg_contrib;
 	/* we force update consideration on load-balancer moves */

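Alternatively, if we go the init-time route in __sched_fork() as you
suggest, an (untested) sketch could look like the below; it assumes
__update_task_entity_contrib() is made callable from core.c, which it
currently is not since it is static in fair.c:

#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
	p->se.avg.decay_count = 0;
	/*
	 * Seed one full period of runnable history so that the contrib
	 * computation resolves to (roughly) the task's full weight:
	 *   contrib = runnable_avg_sum * scale_load_down(se->load.weight)
	 *		/ (runnable_avg_period + 1)
	 *	   = 1024 * w / 1025 ~= w
	 */
	p->se.avg.runnable_avg_period = 1024;
	p->se.avg.runnable_avg_sum = 1024;
	__update_task_entity_contrib(&p->se);
#endif

That would keep the enqueue path untouched, at the cost of exporting one
helper from fair.c.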
Thanks

Regards
Preeti U Murthy

