Message-ID: <1341489508.19870.30.camel@laptop>
Date:	Thu, 05 Jul 2012 13:58:28 +0200
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Paul Turner <pjt@...gle.com>
Cc:	linux-kernel@...r.kernel.org, Venki Pallipadi <venki@...gle.com>,
	Srivatsa Vaddagiri <vatsa@...ibm.com>,
	Vincent Guittot <vincent.guittot@...aro.org>,
	Nikunj A Dadhania <nikunj@...ux.vnet.ibm.com>
Subject: Re: [PATCH 12/16] sched: refactor update_shares_cpu() ->
 update_blocked_avgs()

On Wed, 2012-06-27 at 19:24 -0700, Paul Turner wrote:
> Now that running entities maintain their own load-averages the work we must do
> in update_shares() is largely restricted to the periodic decay of blocked
> entities.  This allows us to be a little less pessimistic regarding our
> occupancy on rq->lock and the associated rq->clock updates required.

So what you're saying is that since 'weight' now includes runtime
behaviour (where we hope the recent past matches the near future), we
don't need to update shares quite as often: the effect of sleep-wakeup
cycles isn't nearly as big because they're already anticipated.
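Just to make sure we mean the same thing, here's a toy (userspace, not
kernel) sketch of the decay I have in mind, assuming the y^32 = 1/2
per-period decay described earlier in the series; the constants are
purely illustrative, not the kernel's fixed-point tables:

/*
 * Toy model: a blocked entity's contribution halves every 32 periods,
 * so a task that just went to sleep still shows up in the averages for
 * a while and its disappearance is anticipated rather than seen as a
 * step change at the next update_shares().
 */
#include <math.h>
#include <stdio.h>

int main(void)
{
	const double y = pow(0.5, 1.0 / 32.0);	/* per-period decay factor */
	double blocked = 1024.0;		/* load at the moment of sleep */
	int p;

	for (p = 0; p <= 128; p += 32)
		printf("periods=%3d  blocked contribution=%6.1f\n",
		       p, blocked * pow(y, p));
	return 0;
}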

So how is the decay of blocked load still significant? Surely that too
is mostly part of the anticipated sleep/wake cycle already caught in the
runtime behaviour.

Or is this the primary place where we decay? If so, that wasn't obvious
and thus wants a comment someplace.
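If it is, something like the below (my wording, adjust as you see fit)
above update_blocked_averages() would have answered that:

	/*
	 * XXX this walk is the primary (only?) place where the load of
	 * blocked entities is decayed; a cfs_rq full of sleepers only
	 * sees its contribution shrink from here.
	 */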

> Signed-off-by: Paul Turner <pjt@...gle.com>
> ---

> +static void update_blocked_averages(int cpu)
>  {
>  	struct rq *rq = cpu_rq(cpu);
> +	struct cfs_rq *cfs_rq;
> +
> +	unsigned long flags;
> +	int num_updates = 0;
>  
>  	rcu_read_lock();
> +	raw_spin_lock_irqsave(&rq->lock, flags);
> +	update_rq_clock(rq);
>  	/*
>  	 * Iterates the task_group tree in a bottom up fashion, see
>  	 * list_add_leaf_cfs_rq() for details.
>  	 */
>  	for_each_leaf_cfs_rq(rq, cfs_rq) {
> +		__update_blocked_averages_cpu(cfs_rq->tg, rq->cpu);
>  
> +		/*
> +		 * Periodically release the lock so that a cfs_rq with many
> +		 * children cannot hold it for an arbitrary period of time.
> +		 */
> +		if (num_updates++ % 20 == 0) {
> +			raw_spin_unlock_irqrestore(&rq->lock, flags);
> +			cpu_relax();
> +			raw_spin_lock_irqsave(&rq->lock, flags);

Gack.. that's not real pretty, is it.. Esp. since we're still holding the
RCU read lock and are thus (mostly) still not preemptible.

How much of a problem was this? The changelog is silent on it. (A less
eager variant is sketched below the quoted hunk.)

> +			update_rq_clock(rq);
> +		}
>  	}
> +
> +	raw_spin_unlock_irqrestore(&rq->lock, flags);
>  	rcu_read_unlock();
>  }
>  
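Not that I have something much prettier; the below (completely untested,
and it still has the RCU issue) would at least only drop the lock when
someone is actually waiting for it, instead of unconditionally every 20
iterations:

	for_each_leaf_cfs_rq(rq, cfs_rq) {
		__update_blocked_averages_cpu(cfs_rq->tg, rq->cpu);

		/*
		 * Back off only when rq->lock is contended (or we need
		 * to reschedule), rather than on a fixed iteration count.
		 */
		if (raw_spin_is_contended(&rq->lock) || need_resched()) {
			raw_spin_unlock_irqrestore(&rq->lock, flags);
			cpu_relax();
			raw_spin_lock_irqsave(&rq->lock, flags);
			update_rq_clock(rq);
		}
	}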



