Date:	Fri, 02 May 2008 13:46:13 -0500
From:	Joel Schopp <jschopp@...tin.ibm.com>
To:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
CC:	Ingo Molnar <mingo@...e.hu>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
	Anton Blanchard <anton@....ibm.com>,
	Benjamin Herrenschmidt <benh@...nel.crashing.org>
Subject: Re: questions on calc_delta_mine() in sched.c

> This one builds and... boots

I'll try to test it on my end.

> +	struct load_weight lw_cache[4];
> +	int lw_cache_idx;
> +
>  	struct cfs_rq cfs;
>  	struct rt_rq rt;
>  
> @@ -1438,8 +1441,24 @@ calc_delta_mine(unsigned long delta_exec
>  {
>  	u64 tmp;
>  
> -	if (unlikely(!lw->inv_weight))
> -		lw->inv_weight = (WMULT_CONST-lw->weight/2) / (lw->weight+1);
> +	if (!lw->inv_weight) {

Yep, got to get rid of the unlikely; with inv_weight now initialized to
zero, that path isn't unlikely any more.

> +		struct rq *rq = cpu_rq(smp_processor_id());
> +		unsigned long weight = lw->weight;
> +		int i;
> +
> +		for (i = 0; i < ARRAY_SIZE(rq->lw_cache); i++) {
> +			if (rq->lw_cache[i].weight == weight) {
> +				lw->inv_weight = rq->lw_cache[i].inv_weight;
> +				goto got_inv;
> +			}
> +		}
> +		if (unlikely(!weight))
> +			weight++;
> +		lw->inv_weight = 1 + (WMULT_CONST - weight/2) / weight;

I bet just dividing by weight + 1 unconditionally would be cheaper than 
doing the test and shouldn't skew results too badly.
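
To put a rough number on the skew, here's a quick userspace sketch (mine,
not from the patch; WMULT_CONST assumed to be 1ULL << 32 as in the 64-bit
case, weights taken from the nice-level table) comparing the patch's
divide by weight against an unconditional divide by weight + 1:

	#include <stdio.h>
	#include <stdint.h>

	#define WMULT_CONST	(1ULL << 32)

	int main(void)
	{
		/* roughly the nice 19, nice 0 and nice -20 weights */
		unsigned long weights[] = { 15, 1024, 88761 };
		unsigned int i;

		for (i = 0; i < sizeof(weights)/sizeof(weights[0]); i++) {
			unsigned long w = weights[i];
			uint64_t div_w  = 1 + (WMULT_CONST - w/2) / w;
			uint64_t div_w1 = 1 + (WMULT_CONST - w/2) / (w + 1);

			printf("w=%6lu  /w=%11llu  /(w+1)=%11llu  skew=%.3f%%\n",
			       w, (unsigned long long)div_w,
			       (unsigned long long)div_w1,
			       100.0 * (double)(div_w - div_w1) / (double)div_w);
		}
		return 0;
	}

The skew comes out at roughly 1/(w+1): about 0.1% at the nice-0 weight and
only really visible at the very small weights, so "not too badly" seems fair.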

> +		rq->lw_cache[rq->lw_cache_idx] = *lw;
> +		rq->lw_cache_idx++;
> +		rq->lw_cache_idx %= ARRAY_SIZE(rq->lw_cache);
> +	}
> + got_inv:

Doctor, I think the cure is worse than the disease.  I'd expect that even
if all these extra loads hit the cache, together they'd still be more
expensive than the divide they save.  Not that I have any better solution
myself.
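
To spell out the tradeoff (simplified; this ignores the overflow branch in
the real calc_delta_mine() and assumes delta_exec * weight stays below
2^32, and scaled_delta is just an illustrative name): inv_weight is a
fixed-point reciprocal, so once it is known the hot path is a multiply and
a shift, something like:

	#include <stdint.h>

	#define WMULT_SHIFT	32

	/* approximates delta_exec * weight / lw->weight */
	static uint64_t scaled_delta(uint64_t delta_exec, uint64_t weight,
				     uint64_t inv_weight)
	{
		return (delta_exec * weight * inv_weight) >> WMULT_SHIFT;
	}

So the divide only runs when inv_weight has to be (re)computed, which is
the only thing that makes caching the results tempting in the first place.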

>  
>  	tmp = (u64)delta_exec * weight;
>  	/*
> @@ -8025,7 +8044,7 @@ static void init_tg_cfs_entry(struct tas
>  
>  	se->my_q = cfs_rq;
>  	se->load.weight = tg->shares;
> -	se->load.inv_weight = div64_64(1ULL<<32, se->load.weight);
> +	se->load.inv_weight = 0;
>  	se->parent = parent;
>  }
>  #endif
> @@ -8692,7 +8711,7 @@ static void __set_se_shares(struct sched
>  		dequeue_entity(cfs_rq, se, 0);
>  
>  	se->load.weight = shares;
> -	se->load.inv_weight = div64_64((1ULL<<32), shares);
> +	se->load.inv_weight = 0;
>  
>  	if (on_rq)
>  		enqueue_entity(cfs_rq, se, 0);
> 
> 

I think a patch to get rid of the unlikely and to change these two
div64_64 initializations to 0 should be pushed upstream.  Not sure what to
do about the divide itself.
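
To spell out the minimal version I mean (untested, hunks lifted straight
from the patch above, with the per-rq cache left out):

-	if (unlikely(!lw->inv_weight))
+	if (!lw->inv_weight)
 		lw->inv_weight = (WMULT_CONST-lw->weight/2) / (lw->weight+1);

plus the two hunks that set se->load.inv_weight = 0 instead of the
div64_64() calls, unchanged from your patch.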


