Date:	Thu, 24 Feb 2011 19:41:25 -0800
From:	Paul Turner <pjt@...gle.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Cc:	bharata@...ux.vnet.ibm.com, linux-kernel@...r.kernel.org,
	Dhaval Giani <dhaval.giani@...il.com>,
	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	Vaidyanathan Srinivasan <svaidy@...ux.vnet.ibm.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Kamalesh Babulal <kamalesh@...ux.vnet.ibm.com>,
	Ingo Molnar <mingo@...e.hu>,
	Pavel Emelyanov <xemul@...nvz.org>,
	Herbert Poetzl <herbert@...hfloor.at>,
	Avi Kivity <avi@...hat.com>,
	Chris Friesen <cfriesen@...tel.com>,
	Nikhil Rao <ncrao@...gle.com>
Subject: Re: [CFS Bandwidth Control v4 3/7] sched: throttle cfs_rq entities
 which exceed their local quota

On Thu, Feb 24, 2011 at 3:05 AM, Peter Zijlstra <a.p.zijlstra@...llo.nl> wrote:
> On Thu, 2011-02-24 at 10:51 +0530, Bharata B Rao wrote:
>> Hi Peter,
>>
>> I will only answer a couple of your questions and let Paul clarify the rest...
>>
>> On Wed, Feb 23, 2011 at 02:32:13PM +0100, Peter Zijlstra wrote:
>> > On Tue, 2011-02-15 at 19:18 -0800, Paul Turner wrote:
>> >
>> >
>> > > @@ -1363,6 +1407,9 @@ enqueue_task_fair(struct rq *rq, struct
>> > >                   break;
>> > >           cfs_rq = cfs_rq_of(se);
>> > >           enqueue_entity(cfs_rq, se, flags);
>> > > +         /* don't continue to enqueue if our parent is throttled */
>> > > +         if (cfs_rq_throttled(cfs_rq))
>> > > +                 break;
>> > >           flags = ENQUEUE_WAKEUP;
>> > >   }
>> > >
>> > > @@ -1390,8 +1437,11 @@ static void dequeue_task_fair(struct rq
>> > >           cfs_rq = cfs_rq_of(se);
>> > >           dequeue_entity(cfs_rq, se, flags);
>> > >
>> > > -         /* Don't dequeue parent if it has other entities besides us */
>> > > -         if (cfs_rq->load.weight)
>> > > +         /*
>> > > +          * Don't dequeue parent if it has other entities besides us,
>> > > +          * or if it is throttled
>> > > +          */
>> > > +         if (cfs_rq->load.weight || cfs_rq_throttled(cfs_rq))
>> > >                   break;
>> > >           flags |= DEQUEUE_SLEEP;
>> > >   }
>> >
>> > How could we even be running if our parent was throttled?

The only way this can happen is if we are on our way out.  I don't
think the example given below applies to this case.
>>
>> The task isn't actually running. One of its parents up in the hierarchy has
>> been throttled and already dequeued. The task now sits on its immediate
>> parent's runqueue, which isn't throttled, but the task isn't really running
>> either, since the hierarchy is throttled. In this situation, the load
>> balancer can try to pull this task. When that happens, the load balancer
>> tries to dequeue it, and this check ensures that we don't attempt to
>> dequeue a group entity in our hierarchy which has already been dequeued.
>
> That's insane: it's throttled, which means it should be dequeued and
> should thus be invisible to the load balancer.

I agree.  We ensure this does not happen by making the h_load zero.
That is something I thought I was already doing, but apparently not;
I will fix it in the repost.

> load-balancer will try and move tasks around to balance load, but all in
> vain, it'll move phantom loads around and get most confused at best.
>

Yeah, this shouldn't happen; I don't think this example is a valid one.

> Pure and utter suckage if you ask me.
>
>> > > @@ -1438,10 +1524,16 @@ static void account_cfs_rq_quota(struct
>> > >
>> > >   cfs_rq->quota_used += delta_exec;
>> > >
>> > > - if (cfs_rq->quota_used < cfs_rq->quota_assigned)
>> > > + if (cfs_rq_throttled(cfs_rq) ||
>> > > +         cfs_rq->quota_used < cfs_rq->quota_assigned)
>> > >           return;
>> >
>> > So we are throttled but running anyway, I suppose this comes from the PI
>> > ceiling muck?
>>
>> When a cfs_rq is throttled, its representative se (and all its parent
>> se's) get dequeued and the task is marked for resched. But the task entity
>> is still on its throttled parent's cfs_rq (=> task->se.on_rq = 1). Next,
>> during put_prev_task_fair(), we enqueue the task back on its throttled
>> parent's cfs_rq, at which point we end up calling update_curr() on the
>> throttled cfs_rq. This check helps us bail out of that situation.
>
> But why bother with this early exit? At worst you'll call
> tg_request_cfs_quota() in vain, at best you'll find there is runtime
> because the period tick just happened on another cpu and you're good to
> go, yay!

It's for the non-preemptible case, where we could keep running for a
non-trivial amount of time after the reschedule.

Considering your second point: I suppose there could be a micro-benefit
in checking in case the period tick did just happen to occur and then
self-unthrottling... but I don't think it's really worth it.


>
>
>