Message-ID: <20131210114825.GF12849@twins.programming.kicks-ass.net>
Date:	Tue, 10 Dec 2013 12:48:25 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Chris Redpath <chris.redpath@....com>
Cc:	pjt@...gle.com, mingo@...hat.com, alex.shi@...aro.org,
	morten.rasmussen@....com, dietmar.eggemann@....com,
	linux-kernel@...r.kernel.org
Subject: Re: [PATCH 2/2] sched: update runqueue clock before migrations away

On Mon, Dec 09, 2013 at 12:59:10PM +0000, Chris Redpath wrote:
> If we migrate a sleeping task away from a CPU which has the
> tick stopped, then both the clock_task and decay_counter will
> be out of date for that CPU and we will not decay load correctly
> regardless of how often we update the blocked load.
> 
> This is only an issue for tasks which are not on a runqueue
> (because otherwise that CPU would be awake) and simultaneously
> the CPU the task previously ran on has had the tick stopped.

OK, so the idiot in a hurry (me) isn't quite getting the issue.

Normally we update the blocked averages from the tick; clearly when no
tick, no update. So far so good.
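
(For reference, the tick path I mean is roughly this shape; paraphrased
from memory, not the exact source:

	static void
	entity_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr, int queued)
	{
		update_curr(cfs_rq);
		/* age the running entity and this rq's blocked load */
		update_entity_load_avg(curr, 1);
		update_cfs_rq_blocked_load(cfs_rq, 1);
		/* ... */
	}

so a CPU that keeps ticking keeps its clock_task, decay counter and
blocked load current for free.)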

Now, we also update blocked load from idle balance -- which would
include the CPUs without tick through nohz_idle_balance() -- however
this only appears to be done for CONFIG_FAIR_GROUP_SCHED.
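
(That is, fair.c looks something like this; a sketch, not the exact
source:

	#ifdef CONFIG_FAIR_GROUP_SCHED
	static void update_blocked_averages(int cpu)
	{
		/* take rq->lock, walk the cfs_rqs, decay each one */
	}
	#else
	static inline void update_blocked_averages(int cpu) { }
	#endif

so without group scheduling the idle balance path doesn't touch the
blocked load at all.)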

Are you running without cgroup muck? If so, should we make this
unconditional?

If you have cgroup muck enabled; what's the problem? Don't we run
nohz_idle_balance() frequently enough to be effective for updating the
blocked load?

You also seem to have overlooked NO_HZ_FULL, which can stop the tick
even when there's a running task and makes the situation even more fun.

> @@ -4343,6 +4344,25 @@ migrate_task_rq_fair(struct task_struct *p, int next_cpu)
>  	 * be negative here since on-rq tasks have decay-count == 0.
>  	 */
>  	if (se->avg.decay_count) {
> +		/*
> +		 * If we migrate a sleeping task away from a CPU
> +		 * which has the tick stopped, then both the clock_task
> +		 * and decay_counter will be out of date for that CPU
> +		 * and we will not decay load correctly.
> +		 */
> +		if (!se->on_rq && nohz_test_cpu(task_cpu(p))) {
> +			struct rq *rq = cpu_rq(task_cpu(p));
> +			unsigned long flags;
> +			/*
> +			 * Current CPU cannot be holding rq->lock in this
> +			 * circumstance, but another might be. We must hold
> +			 * rq->lock before we go poking around in its clocks
> +			 */
> +			raw_spin_lock_irqsave(&rq->lock, flags);
> +			update_rq_clock(rq);
> +			update_cfs_rq_blocked_load(cfs_rq, 0);
> +			raw_spin_unlock_irqrestore(&rq->lock, flags);
> +		}
>  		se->avg.decay_count = -__synchronize_entity_decay(se);
>  		atomic_long_add(se->avg.load_avg_contrib,
>  						&cfs_rq->removed_load);

Right, as Ben already said, taking a rq->lock there is unfortunate at
best.

So normally we 'throttle' the expense of decaying the blocked load to
ticks. But the above does it on every (suitable) task migration, which
might be far more often.
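
(To put rough numbers on it: each ~1ms period spent blocked multiplies
the contribution by y, with y chosen such that y^32 == 1/2, and
__synchronize_entity_decay() catches up on however many periods have
elapsed. A userspace back-of-envelope, not kernel code:

	#include <stdio.h>
	#include <math.h>

	int main(void)
	{
		double y = pow(0.5, 1.0 / 32.0);  /* y^32 == 1/2 */
		double contrib = 1024.0;          /* load_avg_contrib */

		for (int periods = 0; periods <= 96; periods += 32)
			printf("blocked %2d periods -> %4.0f\n",
			       periods, contrib * pow(y, periods));
		return 0;
	}

which prints 1024, 512, 256, 128. The arithmetic is cheap per call;
it's taking a (possibly remote) rq->lock on every migration to get
there that adds up.)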

So ideally we'd get it all sorted through the nohz_idle_balance() path;
what exactly are the problems with that?
