Message-ID: <559ABCB8.6020209@arm.com>
Date: Mon, 06 Jul 2015 18:36:56 +0100
From: Dietmar Eggemann <dietmar.eggemann@....com>
To: Yuyang Du <yuyang.du@...el.com>,
Morten Rasmussen <Morten.Rasmussen@....com>
CC: Mike Galbraith <umgwanakikbuti@...il.com>,
Peter Zijlstra <peterz@...radead.org>,
Rabin Vincent <rabin.vincent@...s.com>,
"mingo@...hat.com" <mingo@...hat.com>,
"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
Paul Turner <pjt@...gle.com>, Ben Segall <bsegall@...gle.com>
Subject: Re: [PATCH?] Livelock in pick_next_task_fair() / idle_balance()

Hi Yuyang,

On 05/07/15 21:12, Yuyang Du wrote:
> Hi Morten,
>
> On Fri, Jul 03, 2015 at 10:34:41AM +0100, Morten Rasmussen wrote:
>>>> IOW, since task groups include blocked load in the load_avg_contrib (see
>>>> __update_group_entity_contrib() and __update_cfs_rq_tg_load_contrib()) the
>>>> imbalance includes blocked load and hence env->imbalance >=
>>>> sum(task_h_load(p)) for all tasks p on the rq. Which leads to
>>>> detach_tasks() emptying the rq completely in the reported scenario where
>>>> blocked load > runnable load.
>>>
>>> Whenever I want to know the load avg concerning task groups, I need to
>>> walk through the complete code again, and I'd prefer not to do it this time.
>>> But it is not that simple to say "the 118 comes from the blocked load".
>>
>> But the whole hierarchy of group entities is updated each time we enqueue
>> or dequeue a task. I don't see how the group entity load_avg_contrib is
>> not up to date? Why do you need to update it again?
>>
>> In any case, we have one task in the group hierarchy which has a
>> load_avg_contrib of 0 and the grand-grand parent group entity has a
>> load_avg_contrib of 118 and no additional tasks. That load contribution
>> must be from tasks which are no longer around on the rq? No?
>
> load_avg_contrib has WEIGHT inside, so the most I can say is:
> SE: 8f456e00's load_avg_contrib 118 = (its cfs_rq's runnable + blocked) / (tg->load_avg + 1) * tg->shares
>
> The tg->shares is probably 1024 (at least 911). So we are just left with:
>
> cfs_rq (runnable + blocked) / (tg->load_avg + 1) ~ 118 / 1024 = 11.5%
>
> I myself did question the sudden jump from 0 to 118 (see a previous reply).

Do you mean the jump from system-rngd.slice (0) (tg.css.id=3) to
system.slice (118) (tg.css.id=2)?

Maybe the 118 comes from another tg hierarchy (w/ tg.css.id >= 3) inside
the system.slice group, representing another service.
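
Just to make the arithmetic behind the formula you gave above concrete,
here is a standalone sketch (not the kernel code; struct members reduced
to the relevant ones, and the numbers are only picked so the result
matches the reported 118) of what __update_group_entity_contrib() boils
down to:

/* Standalone sketch of what __update_group_entity_contrib() boils down
 * to: the group se's contrib scales the group cfs_rq's runnable *plus*
 * blocked load against the whole tg, so blocked load ends up in the
 * parent's runnable_load_avg while the group se is enqueued.
 */
#include <stdio.h>

struct toy_cfs_rq {
	unsigned long runnable_load_avg;
	unsigned long blocked_load_avg;
};

struct toy_tg {
	unsigned long load_avg;		/* sum of contribs over all cpus */
	unsigned long shares;
};

unsigned long group_se_contrib(const struct toy_cfs_rq *cfs_rq,
			       const struct toy_tg *tg)
{
	unsigned long cfs_rq_load = cfs_rq->runnable_load_avg +
				    cfs_rq->blocked_load_avg;

	return cfs_rq_load * tg->shares / (tg->load_avg + 1);
}

int main(void)
{
	/* made-up numbers, picked so the result matches the reported 118:
	 * no runnable load at all, everything comes from blocked load */
	struct toy_cfs_rq cfs_rq = { .runnable_load_avg = 0,
				     .blocked_load_avg  = 118 };
	struct toy_tg tg = { .load_avg = 1023, .shares = 1024 };

	printf("group se load_avg_contrib = %lu\n",
	       group_se_contrib(&cfs_rq, &tg));		/* -> 118 */
	return 0;
}

So even with zero runnable load left on the group cfs_rq, the se can
still show up with a contrib of 118 at the parent level.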

Rabin, could you share the content of your
/sys/fs/cgroup/cpu/system.slice directory and of /proc/cgroups?

Whether the 118 comes from the cfs_rq->blocked_load_avg of one of the tg
levels of one of the other system.slice tg hierarchies, or whether it
results from the se.avg.load_avg_contrib values of the se's representing
tg's not being updated immediately, is not that important, I guess.
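
For completeness, and if I remember the pre-v4.2 code correctly, the
propagation into tg->load_avg is also rate-limited unless force_update
is set, which is one way those values can lag behind. A standalone
sketch (not the kernel source, and the numbers are made up):

/* Standalone sketch (details from memory, not the kernel source) of the
 * filter in __update_cfs_rq_tg_load_contrib(): w/o force_update, small
 * deltas are not folded into tg->load_avg, so the values used for the
 * group se contrib can lag behind recent en/dequeue activity.
 */
#include <stdio.h>
#include <stdlib.h>

struct toy_cfs_rq {
	long runnable_load_avg;
	long blocked_load_avg;
	long tg_load_contrib;	/* what this cpu last added to the tg */
};

struct toy_tg {
	long load_avg;		/* sum of all per-cpu tg_load_contrib */
};

void toy_update_tg_load_contrib(struct toy_cfs_rq *cfs_rq,
				struct toy_tg *tg, int force_update)
{
	long delta = cfs_rq->runnable_load_avg + cfs_rq->blocked_load_avg -
		     cfs_rq->tg_load_contrib;

	if (!delta)
		return;

	/* ~1/8 hysteresis: only larger changes touch the global value */
	if (force_update || labs(delta) > cfs_rq->tg_load_contrib / 8) {
		tg->load_avg += delta;
		cfs_rq->tg_load_contrib += delta;
	}
}

int main(void)
{
	struct toy_cfs_rq cfs_rq = { .runnable_load_avg = 10,
				     .blocked_load_avg  = 100,
				     .tg_load_contrib   = 104 };
	struct toy_tg tg = { .load_avg = 1024 };

	toy_update_tg_load_contrib(&cfs_rq, &tg, 0);	/* delta 6 <= 13 -> skipped */
	printf("lazy  : tg->load_avg = %ld\n", tg.load_avg);	/* still 1024 */

	toy_update_tg_load_contrib(&cfs_rq, &tg, 1);	/* forced -> applied */
	printf("forced: tg->load_avg = %ld\n", tg.load_avg);	/* now 1030 */
	return 0;
}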

Even if we were able to sync both things (task en/dequeue and tg
se.avg.load_avg_contrib update) perfectly (by calling
update_cfs_rq_blocked_load() always w/ force_update=1 and immediately
after that update_entity_load_avg() for all tg se's in one hierarchy), we
would still have to deal w/ the blocked load part if the tg se
representing system.slice contributes to
cpu_rq(cpu)->cfs.runnable_load_avg.
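
To illustrate the consequence Morten described further up: once
env->imbalance is computed from a runnable_load_avg which includes
blocked load, the sum of task_h_load() over the runnable tasks can be
smaller than the imbalance, so the detach loop only stops once the src
rq is empty. A toy model w/ hypothetical h_load numbers:

/* Toy model of the detach_tasks() situation, w/ hypothetical
 * task_h_load() values: the imbalance (118, including blocked load) is
 * larger than the total runnable h_load (2 * 30), so every runnable
 * task gets detached and the src rq ends up empty.
 */
#include <stdio.h>

int main(void)
{
	long imbalance = 118;			/* includes blocked load */
	long task_h_load[] = { 30, 30 };	/* hypothetical values */
	int nr_tasks = 2, detached = 0;
	int i;

	for (i = 0; i < nr_tasks && imbalance > 0; i++) {
		imbalance -= task_h_load[i];
		detached++;
	}

	printf("detached %d/%d tasks, remaining imbalance %ld\n",
	       detached, nr_tasks, imbalance);	/* -> 2/2, 58 */
	return 0;
}
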
-- Dietmar
>
> But anyway, this really is irrelevant to the discussion.
>
[...]