Message-ID: <20070713090151.13554f72@frecb000686.frec.bull.fr>
Date:	Fri, 13 Jul 2007 09:01:51 +0200
From:	Sébastien Dugué <sebastien.dugue@...l.net>
To:	Josh Triplett <josht@...ux.vnet.ibm.com>
Cc:	Ingo Molnar <mingo@...e.hu>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Linux RT Users <linux-rt-users@...r.kernel.org>,
	Darren Hart <dvhltc@...ibm.com>,
	john stultz <johnstul@...ibm.com>,
	Jean Pierre Dion <jean-pierre.dion@...l.net>,
	Gilles Carry <Gilles.Carry@....bull.net>,
	Steven Rostedt <rostedt@...dmis.org>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [Patch RT] Fix CFS load balancing for RT tasks


 Hi Josh,

On Thu, 12 Jul 2007 10:13:57 -0700 Josh Triplett <josht@...ux.vnet.ibm.com> wrote:

> On Thu, 2007-07-12 at 09:41 -0700, Josh Triplett wrote:
> > On Wed, 2007-07-11 at 16:47 +0200, Sébastien Dugué wrote:
> > >   there seems to be something wrong with the way the CFS balances (or does not
> > > balance) RT tasks. This was evidenced using the sched_football testcase
> > > available from the RT wiki (http://rt.wiki.kernel.org/index.php/IBM_Test_Cases)
> > > which I modified and attached to this mail.
> > > 
> > >   The testcase starts a number of threads which fall into 3 categories:
> > > 
> > > 	1 referee thread: SCHED_FIFO, RT prio 5
> > > 	ncpus defensive threads: SCHED_FIFO, RT prio 4
> > > 	ncpus offensive threads: SCHED_FIFO, RT prio 3
> > > 
> > > 	(ncpus being the number of CPUs)
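
  A minimal sketch of that setup, assuming the usual pthreads way of
starting SCHED_FIFO threads (the helper name and structure below are
illustrative, not taken from the attached testcase):

/* Illustrative only -- not the attached sched_football testcase.
 * Start one SCHED_FIFO thread at a given RT priority with pthreads;
 * the referee would use prio 5, defense prio 4, offense prio 3. */
#include <pthread.h>
#include <sched.h>
#include <string.h>

static pthread_t start_fifo_thread(void *(*fn)(void *), int rt_prio, void *arg)
{
	pthread_attr_t attr;
	struct sched_param sp;
	pthread_t tid;

	pthread_attr_init(&attr);
	/* don't inherit the creator's policy; use the attributes set below */
	pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
	pthread_attr_setschedpolicy(&attr, SCHED_FIFO);

	memset(&sp, 0, sizeof(sp));
	sp.sched_priority = rt_prio;
	pthread_attr_setschedparam(&attr, &sp);

	/* needs root or CAP_SYS_NICE to actually get SCHED_FIFO */
	pthread_create(&tid, &attr, fn, arg);
	pthread_attr_destroy(&attr);
	return tid;
}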
> > > 
> > >   To make a long story short, the defensive threads should end up distributed
> > > among all CPUs, but that's not the case. For example, on a dual HT Xeon box,
> > > after task migration stabilizes we have the following running on the different
> > > CPUs:
> > > 
> > >   CPU 0: defense2
> > >   CPU 1: referee offense2 offense3 offense4 defense3
> > >   CPU 2: offense1
> > >   CPU 3: defense1 defense4
> > > 
> > > which clearly shows the imbalance between CPU 2 and CPU 3: offense1
> > > should not be allowed to run while the higher-priority defense1 and
> > > defense4 are sharing the same CPU.
> > > 
> > >   The following patch fixes this by re-enabling the RT overload detection
> > > for the CFS. It may not be the right solution; maybe it should be
> > > incorporated into the other load-balancing mechanisms. I have not dug
> > > deep enough yet to make that call ;-)
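
  The patch itself is not quoted in this reply. As a rough,
self-contained illustration of what RT overload detection means (the
types and names below are invented for illustration and are not the
actual -rt code): a runqueue with more runnable RT tasks than it can
run at once is flagged as overloaded, and other CPUs look there for the
highest-priority waiting RT task to pull.

/* Hypothetical simplification of the RT-overload idea; not the -rt code. */
#include <stdbool.h>

struct fake_rq {                 /* stand-in for a per-CPU runqueue */
	int rt_nr_running;       /* runnable SCHED_FIFO/SCHED_RR tasks */
	int highest_rt_prio;     /* priority of the best queued RT task */
};

/* "RT overloaded": more than one runnable RT task, i.e. at least one
 * RT task is runnable on this CPU but not currently running. */
static bool rt_overloaded(const struct fake_rq *rq)
{
	return rq->rt_nr_running > 1;
}

/* An idle or lower-priority CPU scans the other runqueues and picks
 * the overloaded one with the highest-priority waiting RT task. */
static int pick_source_cpu(const struct fake_rq *rqs, int nr_cpus, int this_cpu)
{
	int cpu, best_cpu = -1, best_prio = -1;

	for (cpu = 0; cpu < nr_cpus; cpu++) {
		if (cpu == this_cpu || !rt_overloaded(&rqs[cpu]))
			continue;
		if (rqs[cpu].highest_rt_prio > best_prio) {
			best_prio = rqs[cpu].highest_rt_prio;
			best_cpu = cpu;
		}
	}
	return best_cpu;   /* -1: nothing to pull */
}

In the layout quoted above, that pull is what seems to be missing: CPU 2,
running the lower-priority offense1, should notice that CPU 3 is
overloaded with defense1 and defense4 and pull one of them over.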
> > 
> > 2.6.21.5-rt20 plus this patch passed 1000 runs of the standard
> > sched_football on an 8-processor (quad dual-core) x86-64 box.  Nice
> > work.
> 
> Hmmm, seems I spoke a bit too soon; due to a bug in the test log
> checker, the test failed but the log checker said PASS.  Actual results:
> $ grep 'Final ball position' rt-tests.log | sort | uniq -c
>     960 Final ball position: 0
>      39 Final ball position: 1
>       1 Final ball position: 2
> 
> So it failed 4% of the runs.  However, it failed much less
> spectacularly; rather than overrunning the integer maximum, it only
> reached 1-2.  Still a huge improvement despite not solving the problem
> completely.


  Yeah, I had a 5000-iteration test run last night which failed, with
one of the runs ending with a final ball position of 1. So it's not yet
perfect. I think we're hitting the same bug as with the O(1) scheduler
in 2.6.21-rt8.

  It seems that the more CPUs there are, the easier it is to reproduce.
Hopefully I will have access to a quad dual-core box soon to verify
this.

  One more thing: the bug seems even harder to reproduce if I have
a logdev LD_MARK (direct marker, no kprobe) in context_switch().

  Ingo, Thomas, any ideas or opinions? It looks like there might be a race
in the RT balancing code that prevents pulling the highest-priority task
and instead selects a lower-priority one to run.
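
  To make that suspected window concrete, here is a hedged user-space
analogy (names and structure invented, not the actual RT balancing
code): if the puller peeks at the source runqueue's best priority
before taking its lock, the task can start running or move away in
between, and the puller gives up or settles for a lower-priority task.

/* Hypothetical check-then-lock window, as a user-space analogy;
 * fake_rq is a stand-in for a runqueue, here with its lock added. */
#include <pthread.h>

struct fake_rq {
	pthread_mutex_t lock;
	int highest_rt_prio;     /* -1 when no runnable RT task is queued */
};

static int try_pull_highest(struct fake_rq *src)
{
	/* unlocked peek: src appears to have a high-priority waiter */
	int seen_prio = src->highest_rt_prio;

	if (seen_prio < 0)
		return 0;

	/* window: before the lock is taken, src's CPU may start running
	 * or migrate that task, so the best candidate quietly disappears */
	pthread_mutex_lock(&src->lock);
	if (src->highest_rt_prio != seen_prio) {
		/* lost the race: the expected high-prio task is gone;
		 * giving up (or settling for a lower-prio task) here
		 * would match the imbalance seen in the test */
		pthread_mutex_unlock(&src->lock);
		return 0;
	}
	/* ... dequeue and migrate the task here ... */
	pthread_mutex_unlock(&src->lock);
	return 1;
}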

  In the meantime I'll continue poring over the 2.6.21-rt8 scheduler
to try to find something.


  Sébastien.
