Message-Id: <1203323815.5443.4.camel@homer.simson.net>
Date:	Mon, 18 Feb 2008 09:36:55 +0100
From:	Mike Galbraith <efault@....de>
To:	vatsa@...ux.vnet.ibm.com
Cc:	Ingo Molnar <mingo@...e.hu>,
	Lukas Hejtmanek <xhejtman@....muni.cz>, mingo@...hat.com,
	linux-kernel@...r.kernel.org, a.p.zijlstra@...llo.nl,
	dhaval@...ux.vnet.ibm.com, aneesh.kumar@...ibm.com
Subject: Re: 2.6.24-git4+ regression


On Mon, 2008-02-18 at 13:50 +0530, Srivatsa Vaddagiri wrote:
> On Mon, Feb 18, 2008 at 08:38:24AM +0100, Mike Galbraith wrote:
> > Here, it does not.  It seems fine without CONFIG_FAIR_GROUP_SCHED.
> 
> My hunch is it's because of the vruntime-driven preemption, which shoots
> up latencies (and perhaps the fact that Peter hasn't focused much on the
> SMP case yet!).
> 
> Out of curiosity, do you observe the same results in UP mode (maxcpus=1)?

No, I only see bad latency with SMP.

> FWIW, the test patch I had sent earlier didn't address the needs of UP, as
> Peter pointed out to me. Along those lines, I did more experimentation with
> the patch below, which seemed to improve UP latencies as well. Note that I
> don't particularly like the first hunk below; perhaps it needs to be
> surrounded by an if (something) ..

I'll try this patch later (errands).

	Thanks,

	-Mike

> 
> ---
>  kernel/sched_fair.c |   12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> Index: current/kernel/sched_fair.c
> ===================================================================
> --- current.orig/kernel/sched_fair.c
> +++ current/kernel/sched_fair.c
> @@ -523,8 +523,6 @@ place_entity(struct cfs_rq *cfs_rq, stru
>  		if (sched_feat(NEW_FAIR_SLEEPERS))
>  			vruntime -= sysctl_sched_latency;
>  
> -		/* ensure we never gain time by being placed backwards. */
> -		vruntime = max_vruntime(se->vruntime, vruntime);
>  	}
>  
>  	se->vruntime = vruntime;
> @@ -816,6 +814,13 @@ hrtick_start_fair(struct rq *rq, struct 
>  }
>  #endif
>  
> +static inline void dequeue_stack(struct sched_entity *se)
> +{
> +	for_each_sched_entity(se)
> +		if (se->on_rq)
> +			dequeue_entity(cfs_rq_of(se), se, 0);
> +}
> +
>  /*
>   * The enqueue_task method is called before nr_running is
>   * increased. Here we update the fair scheduling stats and
> @@ -828,6 +833,9 @@ static void enqueue_task_fair(struct rq 
>  			    *topse = NULL;	/* Highest schedulable entity */
>  	int incload = 1;
>  
> +	if (wakeup)
> +		dequeue_stack(se);
> +
>  	for_each_sched_entity(se) {
>  		topse = se;
>  		if (se->on_rq) {
> 
> 
> 
> P.S.: Sorry about the slow responses; I am now on a different project :(
> 
