Message-ID: <20150722021954.GC2882@fixme-laptop.cn.ibm.com>
Date:	Wed, 22 Jul 2015 10:19:54 +0800
From:	Boqun Feng <boqun.feng@...il.com>
To:	Yuyang Du <yuyang.du@...el.com>
Cc:	mingo@...nel.org, peterz@...radead.org,
	linux-kernel@...r.kernel.org, pjt@...gle.com, bsegall@...gle.com,
	morten.rasmussen@....com, vincent.guittot@...aro.org,
	dietmar.eggemann@....com, umgwanakikbuti@...il.com,
	len.brown@...el.com, rafael.j.wysocki@...el.com,
	arjan@...ux.intel.com, fengguang.wu@...el.com
Subject: Re: [PATCH v10 6/7] sched: Provide runnable_load_avg back to cfs_rq

On Wed, Jul 15, 2015 at 08:04:41AM +0800, Yuyang Du wrote:
> The cfs_rq's load_avg is composed of runnable_load_avg and blocked_load_avg.
> Before this series, sometimes the runnable_load_avg is used and sometimes
> the load_avg is used. Completely replacing all uses of runnable_load_avg
> with load_avg may be too big a leap, as there is concern that including
> blocked_load_avg would overrate the load. Therefore, we bring
> runnable_load_avg back.
> 
> The new cfs_rq runnable_load_avg is improved so that it is updated for all
> runnable sched_entities at the same time, which solves the problem of one
> sched_entity being updated while the others remain stale.
> 
> Signed-off-by: Yuyang Du <yuyang.du@...el.com>
> ---

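Just to restate the "updated at the same time" point for my own
understanding: because the decay is linear, decaying one aggregate
runnable_load_sum is equivalent to decaying every runnable entity's
contribution by the same factor, so no contribution can go stale relative
to the others. A userspace toy (made-up numbers and decay factor, not
kernel code) to illustrate:

#include <stdio.h>

#define NR_ENTITIES 3

int main(void)
{
	/* per-entity contributions to the runnable load */
	double contrib[NR_ENTITIES] = { 1024.0, 512.0, 256.0 };
	double runnable_load = 0.0;
	const double decay = 0.5;	/* toy per-step decay factor */
	int i, step;

	for (i = 0; i < NR_ENTITIES; i++)
		runnable_load += contrib[i];

	/*
	 * Decaying the single aggregate once per step gives the same result
	 * as decaying each contrib[i] individually and re-summing.
	 */
	for (step = 0; step < 3; step++) {
		runnable_load *= decay;
		printf("step %d: runnable_load = %.1f\n", step + 1, runnable_load);
	}

	return 0;
}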
<snip>

> +/* Remove the runnable load generated by se from cfs_rq's runnable load average */
> +static inline void
> +dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
> +{
> +	update_load_avg(se, 1);
> +

I think we need an update_cfs_rq_load_avg() here, because the
runnable_load_avg may not be up to date by the time
dequeue_entity_load_avg() is called, right?
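
Something like the below (untested, just to show where I mean; assuming the
update_cfs_rq_load_avg() and cfs_rq_clock_task() helpers used elsewhere in
fair.c take (now, cfs_rq) and (cfs_rq) respectively):

static inline void
dequeue_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
	u64 now = cfs_rq_clock_task(cfs_rq);

	update_load_avg(se, 1);
	/*
	 * Decay the cfs_rq sums first so that the subtraction below is
	 * done against up-to-date values.
	 */
	update_cfs_rq_load_avg(now, cfs_rq);

	cfs_rq->runnable_load_avg =
		max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
	cfs_rq->runnable_load_sum =
		max_t(s64, cfs_rq->runnable_load_sum - se->avg.load_sum, 0);
}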

> +	cfs_rq->runnable_load_avg =
> +		max_t(long, cfs_rq->runnable_load_avg - se->avg.load_avg, 0);
> +	cfs_rq->runnable_load_sum =
> +		max_t(s64, cfs_rq->runnable_load_sum - se->avg.load_sum, 0);
> +}
> +
>  /*
>   * Task first catches up with cfs_rq, and then subtract
>   * itself from the cfs_rq (task must be off the queue now).

<snip>

> @@ -2982,7 +3015,7 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
>  	 * Update run-time statistics of the 'current'.
>  	 */
>  	update_curr(cfs_rq);
> -	update_load_avg(se, 1);
> +	dequeue_entity_load_avg(cfs_rq, se);
>  
>  	update_stats_dequeue(cfs_rq, se);
>  	if (flags & DEQUEUE_SLEEP) {

Thanks and Best Regards,
Boqun
