Date:	Sun, 08 Mar 2009 17:20:00 +0100
From:	Mike Galbraith <efault@....de>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	Balazs Scheidler <bazsi@...abit.hu>, linux-kernel@...r.kernel.org,
	Peter Zijlstra <a.p.zijlstra@...llo.nl>
Subject: Re: scheduler oddity [bug?]

On Sun, 2009-03-08 at 16:39 +0100, Ingo Molnar wrote:
> * Mike Galbraith <efault@....de> wrote:
> 
> > The problem with your particular testcase is that while one 
> > half has an avg_overlap (what we use as affinity hint for 
> > synchronous wakeups) which triggers the affinity hint, the 
> > other half has avg_overlap of zero, what it was born with, so 
> > despite significant execution overlap, the scheduler treats 
> > them as if they were truly synchronous tasks.
> 
> hm, why does it stay on zero?

Wakeup preemption.  Presuming here: heavy task wakes light task, is
preempted, light task stuffs data into pipe, heavy task doesn't block,
so no avg_overlap is ever computed.  The heavy task uses 100% CPU.

Running as SCHED_BATCH (virgin source), it becomes sane.

pipetest (6836, #threads: 1)
---------------------------------------------------------
se.exec_start                      :        266073.001296
se.vruntime                        :        173620.953443
se.sum_exec_runtime                :         11324.486321
se.avg_overlap                     :             1.306762
nr_switches                        :                  381
nr_voluntary_switches              :                    2
nr_involuntary_switches            :                  379
se.load.weight                     :                 1024
policy                             :                    3
prio                               :                  120
clock-delta                        :                  109

pipetest (6837, #threads: 1)
---------------------------------------------------------
se.exec_start                      :        266066.098182
se.vruntime                        :         51893.050177
se.sum_exec_runtime                :          2367.077751
se.avg_overlap                     :             0.077492
nr_switches                        :                  897
nr_voluntary_switches              :                  828
nr_involuntary_switches            :                   69
se.load.weight                     :                 1024
policy                             :                    3
prio                               :                  120
clock-delta                        :                  109

> >  static void dequeue_task(struct rq *rq, struct task_struct *p, int sleep)
> >  {
> > +	u64 limit = sysctl_sched_migration_cost;
> > +	u64 runtime = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime;
> > +
> >  	if (sleep && p->se.last_wakeup) {
> >  		update_avg(&p->se.avg_overlap,
> >  			   p->se.sum_exec_runtime - p->se.last_wakeup);
> >  		p->se.last_wakeup = 0;
> > -	}
> > +	} else if (p->se.avg_overlap < limit && runtime >= limit)
> > +		update_avg(&p->se.avg_overlap, runtime);
> >  
> >  	sched_info_dequeued(p);
> >  	p->sched_class->dequeue_task(rq, p, sleep);
> 
> hm, that's weird. We want to limit avg_overlap maintenance to 
> true sleeps only.

Except that when we stop sleeping, we're left with a stale avg_overlap.

> And this patch only makes a difference in the !sleep case - 
> which shouldn't be that common in this workload.

Hack was only to kill the stale zero.  Let's forget hack ;-)

	-Mike
