Date:	Mon, 28 Jul 2014 13:39:39 +0200
From:	Peter Zijlstra <peterz@...radead.org>
To:	Yuyang Du <yuyang.du@...el.com>
Cc:	mingo@...hat.com, linux-kernel@...r.kernel.org, pjt@...gle.com,
	bsegall@...gle.com, arjan.van.de.ven@...el.com,
	len.brown@...el.com, rafael.j.wysocki@...el.com,
	alan.cox@...el.com, mark.gross@...el.com, fengguang.wu@...el.com
Subject: Re: [PATCH 2/2 v4] sched: Rewrite per entity runnable load average
 tracking

On Fri, Jul 18, 2014 at 07:26:06AM +0800, Yuyang Du wrote:

> -static inline void __update_tg_runnable_avg(struct sched_avg *sa,
> -						  struct cfs_rq *cfs_rq)
> -{
> -	struct task_group *tg = cfs_rq->tg;
> -	long contrib;
> -
> -	/* The fraction of a cpu used by this cfs_rq */
> -	contrib = div_u64((u64)sa->runnable_avg_sum << NICE_0_SHIFT,
> -			  sa->runnable_avg_period + 1);
> -	contrib -= cfs_rq->tg_runnable_contrib;
> -
> -	if (abs(contrib) > cfs_rq->tg_runnable_contrib / 64) {
> -		atomic_add(contrib, &tg->runnable_avg);
> -		cfs_rq->tg_runnable_contrib += contrib;
> -	}
> -}


> -static inline void __update_group_entity_contrib(struct sched_entity *se)
> +static inline void update_tg_load_avg(struct cfs_rq *cfs_rq)
>  {
> +	long delta = cfs_rq->avg.load_avg - cfs_rq->tg_load_avg_contrib;
>  
> +	if (delta) {
> +		atomic_long_add(delta, &cfs_rq->tg->load_avg);
> +		cfs_rq->tg_load_avg_contrib = cfs_rq->avg.load_avg;
>  	}
>  }

We talked about this before, you made that an unconditional atomic op on
an already hot line.

You need some words on why this isn't a problem. Either in a comment or
in the Changelog. You cannot leave such changes undocumented.

> +#define subtract_until_zero(minuend, subtrahend)	\
> +	(subtrahend < minuend ? minuend - subtrahend : 0)

WTH is a minuend or subtrahend? Are you a wordsmith in your spare time
and like to make up your own words?

Also, isn't writing: x = max(0, x-y), far more readable to begin with?

> +/*
> + * Group cfs_rq's load_avg is used for task_h_load and update_cfs_share
> + * calc.
> + */
> +static inline int update_cfs_rq_load_avg(u64 now, struct cfs_rq *cfs_rq)
>  {
> +	int decayed;
>  
> +	if (atomic_long_read(&cfs_rq->removed_load_avg)) {
> +		long r = atomic_long_xchg(&cfs_rq->removed_load_avg, 0);
> +		cfs_rq->avg.load_avg = subtract_until_zero(cfs_rq->avg.load_avg, r);
> +		r *= LOAD_AVG_MAX;
> +		cfs_rq->avg.load_sum = subtract_until_zero(cfs_rq->avg.load_sum, r);
>  	}
>  
> +	decayed = __update_load_avg(now, &cfs_rq->avg, cfs_rq->load.weight);
>  
> +#ifndef CONFIG_64BIT
> +	if (cfs_rq->avg.last_update_time != cfs_rq->load_last_update_time_copy) {
> +		smp_wmb();
> +		cfs_rq->load_last_update_time_copy = cfs_rq->avg.last_update_time;
> +	}
> +#endif
>  
> +	return decayed;
> +}

It's a bit unfortunate that we update the copy in another function than
the original, but I think I see why you did that. But is it at all
likely that we do not need to update? That is, does that compare make
any sense?



