Date:	Mon, 09 Mar 2009 16:57:23 +0100
From:	Balazs Scheidler <bazsi@...abit.hu>
To:	Mike Galbraith <efault@....de>
Cc:	Ingo Molnar <mingo@...e.hu>, linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Willy Tarreau <w@....eu>
Subject: Re: [patch] Re: scheduler oddity [bug?]

Hi,

Just an interesting sidenote:

I've ported the quoted patch and
38736f475071b80b66be28af7b44c854073699cc (the one I found via
bisect) to 2.6.27, but these didn't resolve my scheduling problem:
both my test program and my application still use only one CPU. So
probably the rest of the scheduling patches between 2.6.27..2.6.28
have some effect too.
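For illustration, here is a minimal sketch of the kind of workload in
question. This is only an approximation, not the testcase from
http://lkml.org/lkml/2009/3/7/79: a light writer keeps a pipe filled
while a heavy reader burns CPU on every message, so the heavy side
never blocks in read() and its avg_overlap is never updated:

/* pipetest.c -- hypothetical sketch of the reported workload: a light
 * producer keeps a pipe topped up while a heavy consumer does real
 * work per message.  Run it and watch (e.g. in top) whether the two
 * halves end up sharing a single CPU.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define MSGSZ 128

static void burn_cpu(void)
{
        volatile unsigned long i, x = 0;

        for (i = 0; i < 500000; i++)    /* heavy per-message work */
                x += i;
}

int main(void)
{
        int fds[2];
        char buf[MSGSZ];

        if (pipe(fds) < 0) {
                perror("pipe");
                return 1;
        }

        if (fork() == 0) {
                /* light producer: just keep the pipe filled */
                close(fds[0]);
                memset(buf, 'x', sizeof(buf));
                while (write(fds[1], buf, sizeof(buf)) > 0)
                        ;
                _exit(0);
        }

        /* heavy consumer: read a message, then do real work on it */
        close(fds[1]);
        while (read(fds[0], buf, sizeof(buf)) > 0)
                burn_cpu();

        return 0;
}

On the affected kernels both halves of such a pair end up sharing one
CPU, even though the heavy half could easily use a full CPU by itself.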

On Mon, 2009-03-09 at 09:02 +0100, Mike Galbraith wrote:
> On Sun, 2009-03-08 at 18:52 +0100, Ingo Molnar wrote:
> > * Mike Galbraith <efault@....de> wrote:
> > 
> > > On Sun, 2009-03-08 at 16:39 +0100, Ingo Molnar wrote:
> > > > * Mike Galbraith <efault@....de> wrote:
> > > > 
> > > > > The problem with your particular testcase is that while one
> > > > > half has an avg_overlap (what we use as the affinity hint for
> > > > > synchronous wakeups) which triggers the affinity hint, the
> > > > > other half has an avg_overlap of zero, the value it was born
> > > > > with, so despite significant execution overlap the scheduler
> > > > > treats them as if they were truly synchronous tasks.
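For context, the hint being talked about is roughly the following
check in wake_affine() in kernel/sched_fair.c (paraphrased from
memory of the 2.6.28-era code, so treat the exact form as an
assumption rather than a quote):

        /* both partners look synchronous: pull the wakee to this CPU */
        if (sync && curr->se.avg_overlap < sysctl_sched_migration_cost &&
                    p->se.avg_overlap < sysctl_sched_migration_cost)
                return 1;

So a task whose avg_overlap is stuck at its initial zero always looks
like a synchronous partner, no matter how much CPU it actually burns.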
> > > > 
> > > > hm, why does it stay on zero?
> > > 
> > > Wakeup preemption.  Presuming here: heavy task wakes light 
> > > task, is preempted, light task stuffs data into pipe, heavy 
> > > task doesn't block, so no avg_overlap is ever computed.  The 
> > > heavy task uses 100% CPU.
> > > 
> > > Running as SCHED_BATCH (virgin source), it becomes sane.
> > 
> > ah.
> > 
> > I'd argue then that time spent on the rq preempted _should_
> > count in avg_overlap statistics. I.e., couldn't we do something
> > like ... your patch? :)
> > 
> > > >     if (sleep && p->se.last_wakeup) {
> > > >             update_avg(&p->se.avg_overlap,
> > > >                        p->se.sum_exec_runtime - p->se.last_wakeup);
> > > >             p->se.last_wakeup = 0;
> > > > -   }
> > > > +   } else if (p->se.avg_overlap < limit && runtime >= limit)
> > > > +           update_avg(&p->se.avg_overlap, runtime);
> > 
> > Just done unconditionally, i.e. something like:
> > 
> > 	if (sleep) {
> > 		runtime = p->se.sum_exec_runtime - p->se.last_wakeup;
> > 		p->se.last_wakeup = 0;
> > 	} else {
> > 		runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
> > 	}
> > 
> > 	update_avg(&p->se.avg_overlap, runtime);
> > 
> > ?
> 
> OK, I've not seen any problem indications yet, so find patchlet below.
> 
> However! Balazs has stated that this problem is _not_ present in .git,
> and that..
> 
> 	commit 38736f475071b80b66be28af7b44c854073699cc
> 	Author: Gautham R Shenoy <ego@...ibm.com>
> 	Date:   Sat Sep 6 14:50:23 2008 +0530
> 
> ..is what fixed it.  Willy Tarreau verified this as being the case on
> his HW as well.  It is present in .git with my HW.
> 
> I see it as a problem, but it's your call.  Dunno if I'd apply it or
> hold back, given these conflicting reports.
> 
> Anyway...
> 
> Given a task pair communicating via pipe, if one partner fills/drains such
> that the other does not block for extended periods, avg_overlap can be long
> stale, and trigger affine wakeups despite heavy CPU demand.  This can, and
> does, lead to throughput loss in the testcase posted by the reporter.
> 
> Fix this by unconditionally updating avg_overlap at dequeue time instead
> of only updating when a task sleeps.
> 
> See http://lkml.org/lkml/2009/3/7/79 for details/testcase.
> 
> Reported-by: Balazs Scheidler <bazsi@...abit.hu>
> Signed-off-by: Mike Galbraith <efault@....de>
> 
>  kernel/sched.c |    9 +++++++--
>  1 files changed, 7 insertions(+), 2 deletions(-)
> 
> diff --git a/kernel/sched.c b/kernel/sched.c
> index 8e2558c..c670050 100644
> --- a/kernel/sched.c
> +++ b/kernel/sched.c
> @@ -1712,12 +1712,17 @@ static void enqueue_task(struct rq *rq, struct task_struct *p, int wakeup)
>  
>  static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
>  {
> +	u64 runtime;
> +
>  	if (sleep && p->se.last_wakeup) {
> -		update_avg(&p->se.avg_overlap,
> -			   p->se.sum_exec_runtime - p->se.last_wakeup);
> +		runtime = p->se.sum_exec_runtime - p->se.last_wakeup;
>  		p->se.last_wakeup = 0;
> +	} else {
> +		runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
>  	}
>  
> +	update_avg(&p->se.avg_overlap, runtime);
> +
>  	sched_info_dequeued(p);
>  	p->sched_class->dequeue_task(rq, p, sleep);
>  	p->se.on_rq = 0;
> 
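For reference, update_avg() in kernel/sched.c is a simple decaying
average, roughly the following (the exact shift factor is from memory
and should be checked against the tree):

static inline void update_avg(u64 *avg, u64 sample)
{
        s64 diff = sample - *avg;

        /* move the average a fixed fraction of the way toward the sample */
        *avg += diff >> 3;
}

With the patch above the average is fed on every dequeue, so a task
that runs for long stretches without sleeping accumulates a large
avg_overlap and stops being treated as a synchronous wakeup partner.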
-- 
Bazsi

