Message-ID: <20110228035046.GB3005@in.ibm.com>
Date: Mon, 28 Feb 2011 09:20:46 +0530
From: Bharata B Rao <bharata@...ux.vnet.ibm.com>
To: Paul Turner <pjt@...gle.com>
Cc: Peter Zijlstra <a.p.zijlstra@...llo.nl>,
linux-kernel@...r.kernel.org,
Dhaval Giani <dhaval@...ux.vnet.ibm.com>,
Balbir Singh <balbir@...ux.vnet.ibm.com>,
Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
Gautham R Shenoy <ego@...ibm.com>,
Srivatsa Vaddagiri <vatsa@...ibm.com>,
Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
Ingo Molnar <mingo@...e.hu>,
Pavel Emelyanov <xemul@...nvz.org>,
Herbert Poetzl <herbert@...hfloor.at>,
Avi Kivity <avi@...hat.com>,
Chris Friesen <cfriesen@...tel.com>,
Nikhil Rao <ncrao@...gle.com>
Subject: Re: [CFS Bandwidth Control v4 3/7] sched: throttle cfs_rq entities
which exceed their local quota
On Fri, Feb 25, 2011 at 12:51:01PM -0800, Paul Turner wrote:
> On Fri, Feb 25, 2011 at 5:58 AM, Bharata B Rao
> <bharata@...ux.vnet.ibm.com> wrote:
> > On Thu, Feb 24, 2011 at 07:10:58PM -0800, Paul Turner wrote:
> >> On Wed, Feb 23, 2011 at 5:32 AM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> >> > On Tue, 2011-02-15 at 19:18 -0800, Paul Turner wrote:
> >>
> >> >> + update_cfs_load(cfs_rq, 0);
> >> >> +
> >> >> + /* prevent previous buddy nominations from re-picking this se */
> >> >> + clear_buddies(cfs_rq_of(se), se);
> >> >> +
> >> >> + /*
> >> >> + * It's possible for the current task to block and re-wake before task
> >> >> + * switch, leading to a throttle within enqueue_task->update_curr()
> >> >> + * versus an entity that has not technically been enqueued yet.
> >> >
> >> > I'm not quite seeing how this would happen.. care to expand on this?
> >> >
> >>
> >> I'm not sure the example Bharata gave is correct -- I'm going to treat
> >> that discussion separately as it's not the intent here.
> >
> > Just for the record, my examples were not given for the above question from
> > Peter.
> >
> > I answered two questions and I am tempted to stand by those until proven
> > wrong :)
>
> This is important to get right, I'm happy to elaborate.
>
> >
> > 1. Why do we have cfs_rq_throtted() check in dequeue_task_fair() ?
>
> The check is primarily needed because we could become throttled as
> part of a regular dequeue. At which point we bail because the parent
> dequeue is actually complete.
>
> (Were it necessitated by load balance we could actually not do this
> and just perform a hierarchical check within load_balance_fair)
>
> > ( => How could we be running if our parent was throttled ?)
> >
>
> The only way we can be running if our parent was throttled is if /we/
> triggered that throttle and have been marked for re-schedule.
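
To make sure we are reading this the same way, here is roughly how I
understand that bail-out. This is a stand-alone toy model with made-up
field and function names (bandwidth_constrained, runtime_exhausted,
dequeue_task_walk), not the actual patch code:

```c
/* Minimal model of a group-scheduling hierarchy (hypothetical layout). */
struct cfs_rq {
	int bandwidth_constrained;	/* group has a finite quota */
	int runtime_exhausted;		/* local quota is used up */
	int throttled;
	int nr_running;
	struct sched_entity *parent_se;	/* our group entity in the parent */
};

struct sched_entity {
	struct cfs_rq *cfs_rq;		/* runqueue this entity is queued on */
};

static int cfs_rq_throttled(struct cfs_rq *cfs_rq)
{
	return cfs_rq->throttled;
}

static void dequeue_task_walk(struct sched_entity *se)
{
	for (; se; se = se->cfs_rq->parent_se) {
		struct cfs_rq *cfs_rq = se->cfs_rq;

		cfs_rq->nr_running--;

		/*
		 * If this dequeue exhausted the group's quota, the group
		 * gets throttled: its group entity is removed from the
		 * parent as part of the throttle, so the parent-level
		 * dequeue is already complete and we bail.
		 */
		if (cfs_rq->bandwidth_constrained && cfs_rq->runtime_exhausted)
			cfs_rq->throttled = 1;

		if (cfs_rq_throttled(cfs_rq))
			break;
	}
}
```

That is, the cfs_rq_throttled() check in dequeue_task_fair() stops the
upward walk once a regular dequeue has itself caused a throttle.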
>
> > Consider the following hierarchy.
> >
> > Root Group
> > |
> > |
> > Group 1 (Bandwidth constrained group)
> > |
> > |
> > Group 2 (Infinite runtime group)
> >
> > Assume both the groups have tasks in them.
> >
> > When Group 1 is throttled, its cfs_rq is marked throttled, and is removed from
> > Root group's runqueue. But leaf tasks in Group 2 continue to be enqueued in
> > Group 1's runqueue.
> >
>
> Yes, the hierarchy state is maintained in isolation.
>
> > Load balancer kicks in on CPU A and figures out that it can pull a few tasks
> > from CPU B (busiest_cpu). It iterates through all the task groups
> > (load_balance_fair) and considers Group 2 also. It tries to pull a task from
> > CPU B's cfs_rq for Group 2. I don't see anything that would prevent the
> > load balancer from bailing out here.
>
> Per above, the descendants of a throttled group are also identified
> (and appropriately skipped) using h_load.
This bit is still unclear to me. We do nothing in tg_load_down() to treat
throttled cfs_rqs differently when calculating h_load, nor do we do
anything in load_balance_fair() to explicitly identify descendants of a
throttled group using h_load, AFAICS. All we have is the
cfs_rq_throttled() check, which I think should be converted to an
entity_on_rq()-style check that looks at the whole hierarchy and discards
pulling from throttled hierarchies.
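
Something along these lines is what I have in mind. This is a user-space
sketch with hypothetical names (throttled_hierarchy, the parent pointer),
just to illustrate the ancestry walk, not actual kernel code:

```c
#include <stddef.h>

/* Toy model: each group runqueue knows its parent runqueue. */
struct cfs_rq {
	int throttled;
	struct cfs_rq *parent;
};

static int cfs_rq_throttled(struct cfs_rq *cfs_rq)
{
	return cfs_rq->throttled;
}

/*
 * Check the whole ancestry, not just the local runqueue: a group with
 * infinite runtime of its own (Group 2 in the example above) must still
 * be skipped by the load balancer while any ancestor is throttled.
 */
static int throttled_hierarchy(struct cfs_rq *cfs_rq)
{
	for (; cfs_rq; cfs_rq = cfs_rq->parent)
		if (cfs_rq_throttled(cfs_rq))
			return 1;
	return 0;
}
```

load_balance_fair() would then test throttled_hierarchy() on the busiest
CPU's cfs_rq for the group and skip it, instead of only testing the local
cfs_rq_throttled().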
Regards,
Bharata.
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/