Message-Id: <1231142044.30237.0.camel@twins>
Date:	Mon, 05 Jan 2009 08:54:04 +0100
From:	Peter Zijlstra <a.p.zijlstra@...llo.nl>
To:	Mike Galbraith <efault@....de>
Cc:	Jayson King <dev@...sonking.com>, linux-kernel@...r.kernel.org,
	mingo@...e.hu
Subject: Re: [patch] Re: problem with "sched: revert back to per-rq
 vruntime"?

On Fri, 2009-01-02 at 12:16 +0100, Mike Galbraith wrote:
> On Thu, 2009-01-01 at 18:14 -0600, Jayson King wrote:
> 
> > Still works OK for me. You may add, if you like:
> > 
> > Tested-By: Jayson King <dev@...sonking.com>
> 
> Actually, I prefer the below.  Everything in one spot and obvious.
> 
> Impact: bug fixlet.
> 
> Fix sched_slice() to emit a sane result whether a task is currently enqueued or not.
> 
> Signed-off-by: Mike Galbraith <efault@....de>

Looks good, thanks Mike!!

>  kernel/sched_fair.c |   30 ++++++++++++------------------
>  1 files changed, 12 insertions(+), 18 deletions(-)
> 
> diff --git a/kernel/sched_fair.c b/kernel/sched_fair.c
> index 5ad4440..1a35bad 100644
> --- a/kernel/sched_fair.c
> +++ b/kernel/sched_fair.c
> @@ -386,20 +386,6 @@ int sched_nr_latency_handler(struct ctl_table *table, int write,
>  #endif
>  
>  /*
> - * delta *= P[w / rw]
> - */
> -static inline unsigned long
> -calc_delta_weight(unsigned long delta, struct sched_entity *se)
> -{
> -	for_each_sched_entity(se) {
> -		delta = calc_delta_mine(delta,
> -				se->load.weight, &cfs_rq_of(se)->load);
> -	}
> -
> -	return delta;
> -}
> -
> -/*
>   * delta /= w
>   */
>  static inline unsigned long
> @@ -440,12 +426,20 @@ static u64 __sched_period(unsigned long nr_running)
>   */
>  static u64 sched_slice(struct cfs_rq *cfs_rq, struct sched_entity *se)
>  {
> -	unsigned long nr_running = cfs_rq->nr_running;
> +	u64 slice = __sched_period(cfs_rq->nr_running + !se->on_rq);
> +	
> +	for_each_sched_entity(se) {
> +		struct load_weight *load = &cfs_rq->load;
>  
> -	if (unlikely(!se->on_rq))
> -		nr_running++;
> +		if (unlikely(!se->on_rq)) {
> +			struct load_weight lw = cfs_rq->load;
>  
> -	return calc_delta_weight(__sched_period(nr_running), se);
> +			update_load_add(&lw, se->load.weight);
> +			load = &lw;
> +		}
> +		slice = calc_delta_mine(slice, se->load.weight, load);
> +	}
> +	return slice;
>  }
>  
>  /*
> 
> 

