Message-ID: <20140121193759.GR11314@laptop.programming.kicks-ass.net>
Date:	Tue, 21 Jan 2014 20:37:59 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	bsegall@...gle.com
Cc:	linux-kernel@...r.kernel.org, mingo@...nel.org,
	daniel.lezcano@...aro.org, pjt@...gle.com
Subject: Re: [PATCH 8/9] sched/fair: Optimize cgroup pick_next_task_fair

On Tue, Jan 21, 2014 at 11:24:39AM -0800, bsegall@...gle.com wrote:
> Peter Zijlstra <peterz@...radead.org> writes:

> > +#ifdef CONFIG_FAIR_GROUP_SCHED
> > +	/*
> > +	 * If we haven't yet done put_prev_entity and the selected task is
> > +	 * a different task than we started out with, try and touch the least
> > +	 * amount of cfs_rq trees.
> > +	 */
> > +	if (prev) {
> > +		if (prev != p) {
> > +			pse = &prev->se;
> > +
> > +			while (!(cfs_rq = is_same_group(se, pse))) {
> > +				int se_depth = se->depth;
> > +				int pse_depth = pse->depth;
> > +
> > +				if (se_depth <= pse_depth) {
> > +					put_prev_entity(cfs_rq_of(pse), pse);
> > +					pse = parent_entity(pse);
> > +				}
> > +				if (se_depth >= pse_depth) {
> > +					set_next_entity(cfs_rq_of(se), se);
> > +					se = parent_entity(se);
> > +				}
> > +			}
> >  
> > +			put_prev_entity(cfs_rq, pse);
> > +			set_next_entity(cfs_rq, se);
> > +		}

(A)

> > +		/*
> > +		 * In case the common cfs_rq got throttled, just give up and
> > +		 * put the stack and retry.
> > +		 */
> > +		if (unlikely(check_cfs_rq_runtime(cfs_rq))) {
> > +			put_prev_task_fair(rq, p);
> > +			prev = NULL;
> > +			goto again;
> > +		}
> 
> This double-calls put_prev_entity on any non-common cfs_rqs and ses,
> which means double __enqueue_entity, among other things. Just doing the
> put_prev loop from se->parent should fix that.

I'm not seeing that; at point (A) we've completely switched over from
@prev to @p: we've put every pse up to the common parent and set every
se from there back down to @p.

So if we then do put_prev_task_fair(rq, p), we simply undo all the
set_next_entity(se) calls we just did and continue from the common
parent upwards.
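
(For reference, put_prev_task_fair() is just the full bottom-up walk;
roughly, modulo details:

	static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
	{
		struct sched_entity *se = &prev->se;
		struct cfs_rq *cfs_rq;

		/* Put every entity on @prev's hierarchy, bottom up. */
		for_each_sched_entity(se) {
			cfs_rq = cfs_rq_of(se);
			put_prev_entity(cfs_rq, se);
		}
	}

so calling it on @p puts everything we just set, from the task all the
way up, and the retry starts from a clean slate.)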

> However, any sort of abort means that we may have already done
> set_next_entity on some children, which even with the changes to
> pick_next_entity will cause problems, up to and including double
> __dequeue_entity I think.

But the abort is only done after we've completely set up @p as the
current task.

Yes, completely tearing it all down again is probably a waste, but
bandwidth enforcement should be rare and I didn't want to complicate
things even further for such a rare case.

> Also, this way we never do check_cfs_rq_runtime on any parents of the
> common cfs_rq, which could even have been the reason for the resched to
> begin with. I'm not sure if there would be any problem doing it on the
> way down or not, I don't see any problems at a glance.

Oh, so we allow a parent to have less runtime than the sum of all its
children?

Indeed, in that case we can miss something... we could try calling
check_cfs_rq_runtime() from the initial top-down selection loop? When it
returns true, just put the entire stack and don't pretend to be smart?
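
Something like the below, completely untested sketch against the new
selection loop (signatures approximate; the "simple" label just stands
for the existing put-the-whole-stack-and-do-the-full-pick path):

	do {
		struct sched_entity *curr = cfs_rq->curr;

		/*
		 * If this cfs_rq ran out of runtime, stop trying to be
		 * clever: put the whole stack and fall back to the full
		 * put_prev_task_fair() + pick.
		 */
		if (curr && prev && unlikely(check_cfs_rq_runtime(cfs_rq)))
			goto simple;

		se = pick_next_entity(cfs_rq, curr);
		cfs_rq = group_cfs_rq(se);
	} while (cfs_rq);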
