Message-ID: <CAPM31RJw7a-+QNRwwukg3h3bHCQs++cnT3DgCyHaJTZY7qDsXA@mail.gmail.com>
Date: Wed, 13 Feb 2013 07:41:32 -0800
From: Paul Turner <pjt@...gle.com>
To: Alex Shi <alex.shi@...el.com>
Cc: Peter Zijlstra <peterz@...radead.org>,
torvalds@...ux-foundation.org, mingo@...hat.com,
tglx@...utronix.de, akpm@...ux-foundation.org,
arjan@...ux.intel.com, bp@...en8.de, namhyung@...nel.org,
efault@....de, vincent.guittot@...aro.org,
gregkh@...uxfoundation.org, preeti@...ux.vnet.ibm.com,
viresh.kumar@...aro.org, linux-kernel@...r.kernel.org
Subject: Re: [patch v4 07/18] sched: set initial load avg of new forked task
On Wed, Feb 13, 2013 at 7:14 AM, Alex Shi <alex.shi@...el.com> wrote:
> On 02/12/2013 06:26 PM, Peter Zijlstra wrote:
>> On Thu, 2013-01-24 at 11:06 +0800, Alex Shi wrote:
>>> +	/*
>>> +	 * Set the initial load avg of a new task to be the same as
>>> +	 * its load, to avoid a fork burst making a few CPUs look
>>> +	 * too heavily loaded.
>>> +	 */
>>> +	if (flags & ENQUEUE_NEWTASK)
>>> +		se->avg.load_avg_contrib = se->load.weight;
>>
>> I seem to have vague recollections of a discussion with pjt where we
>> talked about the initial behaviour of tasks; from this haze I had the
>> impression that new tasks should behave like full weight..
>>
>
> Here it just makes the new task have full weight..
>
>> PJT, is something more fundamentally screwy?
>>
So tasks get the quotient of their runnability over the period. Given
that the period is initially equivalent to the runnability, it's
definitely the *intent* to start at full weight and ramp down.

Thinking on it, perhaps this is running afoul of amortization -- we
only recompute this quotient on each 1024us boundary; perhaps in the
fork-bomb case we're too slow to accumulate these updates.
Alex, does something like the following help? This would force an
initial __update_entity_load_avg_contrib() update the first time we
see the task.
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 1dff78a..9d1c193 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1557,8 +1557,8 @@ static void __sched_fork(struct task_struct *p)
* load-balance).
*/
#if defined(CONFIG_SMP) && defined(CONFIG_FAIR_GROUP_SCHED)
- p->se.avg.runnable_avg_period = 0;
- p->se.avg.runnable_avg_sum = 0;
+ p->se.avg.runnable_avg_period = 1024;
+ p->se.avg.runnable_avg_sum = 1024;
#endif
#ifdef CONFIG_SCHEDSTATS
memset(&p->se.statistics, 0, sizeof(p->se.statistics));
>
>
> --
> Thanks
> Alex