Message-Id: <1175804976.7814.109.camel@Homer.simpson.net>
Date:	Thu, 05 Apr 2007 22:29:36 +0200
From:	Mike Galbraith <efault@....de>
To:	Con Kolivas <kernel@...ivas.org>
Cc:	Ingo Molnar <mingo@...e.hu>,
	linux list <linux-kernel@...r.kernel.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	ck list <ck@....kolivas.org>
Subject: Re: [test] sched: SD-latest versus Mike's latest

On Fri, 2007-04-06 at 02:08 +1000, Con Kolivas wrote:
> On Thursday 05 April 2007 21:54, Ingo Molnar wrote:
> > * Mike Galbraith <efault@....de> wrote:
> > > On Tue, 2007-04-03 at 08:01 +0200, Ingo Molnar wrote:
> > > > looks interesting - could you send the patch?
> > >
> > > Ok, this is looking/feeling pretty good in testing.  Comments on
> > > fugliness etc much appreciated.
> > >
> > > Below the numbers is a snapshot of my experimental tree.  It's a
> > > mixture of my old throttling/anti-starvation tree and the task
> 
> Throttling to try to get to SD fairness? The mainline state machine becomes 
> more complex than ever and fluctuates from interactive to fair by an as-yet 
> unchosen magic number timeframe which ebbs and flows.

I believe I've already met and surpassed SD fairness.  Bold statement,
but I believe it's true.  I'm more worried about becoming _too_ fair.

Show me your numbers.  I showed you mine with both SD and my patches.

WRT magic and state machine complexity:  If you read the patch, there is
nothing "magical" about it.  It doesn't do anything but monitor CPU
usage and move a marker.  It does nothing the least bit complicated, and
what it does, it does in the slow path.  The only thing it does in the
fast path is to move the marker, and perhaps tag a targeted task.  State
machine?  There is nothing there that resembles a state machine to me;
the heuristic is simply: add sleep time, burn it on use.
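
To be concrete about what "add sleep time, burn on use" means, here is
a toy userspace sketch of the bookkeeping.  The names and numbers below
(sleep_credit, throttle_mark, the 100ms cap) are made up purely for
illustration; they are not the identifiers or constants in the actual
patch.

/*
 * Illustrative sketch only: sleep banks credit, CPU use burns it, and
 * a per-task marker advances once the credit is gone.
 */
#include <stdio.h>

#define CREDIT_MAX_NS	100000000ULL	/* illustrative cap: 100ms */

struct task_acct {
	unsigned long long sleep_credit;	/* banked sleep time, ns */
	unsigned long long throttle_mark;	/* where the bonus stops */
};

/* credited when the task wakes after sleeping for sleep_ns */
static void account_sleep(struct task_acct *t, unsigned long long sleep_ns)
{
	t->sleep_credit += sleep_ns;
	if (t->sleep_credit > CREDIT_MAX_NS)
		t->sleep_credit = CREDIT_MAX_NS;
}

/* debited as the task consumes run_ns of CPU; marker moves when dry */
static void account_run(struct task_acct *t, unsigned long long run_ns)
{
	if (run_ns >= t->sleep_credit) {
		t->throttle_mark += run_ns - t->sleep_credit;
		t->sleep_credit = 0;
	} else {
		t->sleep_credit -= run_ns;
	}
}

int main(void)
{
	struct task_acct t = { 0, 0 };

	account_sleep(&t, 50000000ULL);		/* slept 50ms */
	account_run(&t, 80000000ULL);		/* then ran 80ms */
	printf("credit=%llu mark=%llu\n", t.sleep_credit, t.throttle_mark);
	return 0;
}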

> > > promotion patch, with the addition of a scheduling class for
> > > interactive tasks to dish out some of that targeted unfairness I
> > > mentioned.
> 
> Nice -10 on mainline ruins the latency of nice 0 tasks unlike SD. New 
> scheduling class just for X? Sounds like a very complicated 
> userspace-changing way to just do the equivalent of "nice -n -10" obfuscated.

I believe this patch makes the massive nice -10 vs nice 0 latency a
thing of the past.  Testing welcome.  WRT "nice -10 obfuscated", that's
a load of high grade horse-hockey.  Very good reasons were posted here
as to why that is a very bad idea; perhaps you haven't read them.  (You
can find them if you choose.)

Your criticism of SCHED_INTERACTIVE leaves me dumbfounded, since you
were, and still are, specifically telling me that I should tell the
scheduler that X is special.  I did precisely that, and am also trying
to tell it that its clients are special too, _without_ having to start
each and every client at nice -10 or whatever the static number of the
day happens to be.
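
To illustrate the kind of userspace change being objected to, here is a
toy sketch of a client opting itself into such a class, assuming it is
selected via sched_setscheduler() like the existing policies.
SCHED_INTERACTIVE is not a policy mainline defines; the value below is
just a placeholder, not anything from the experimental patch.

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

#ifndef SCHED_INTERACTIVE
#define SCHED_INTERACTIVE 6	/* placeholder value, not mainline */
#endif

int main(void)
{
	struct sched_param sp = { .sched_priority = 0 };

	/* pid 0 == the calling task; stock kernels will reject this */
	if (sched_setscheduler(0, SCHED_INTERACTIVE, &sp) == -1) {
		perror("sched_setscheduler");
		return 1;
	}
	printf("pid %d now SCHED_INTERACTIVE\n", (int)getpid());
	return 0;
}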

> > here's some test results, comparing SD-latest to Mike's-latest:
> >
> > re-testing the weak points of the vanilla scheduler + Mike's:
> >
> >  - thud.c:    this workload has almost unnoticeable effect
> >  - fiftyp.c:  noticeable, but a lot better than previously!
> 
> Load of 1.5 makes mainline a doorstop without throttling.

Where does that come from?  Doesn't jibe with my experience at all.

> > re-testing the weak points of SD:
> >
> >  - hackbench: still unusable under such type of high load - no improvement.
> 
> Load of 160. Is proportional slowdown bad?
> 
> >  - make -j:   still less interactive than Mike's - no improvement.
> 
> Depends on how big your job number vs cpu is. The better the throttling gets 
> with mainline the better SD gets in this comparison. At equal fairness 
> mainline does not have the low latency interactivity SD has.

So we should do 8ms slices too?  I don't think that's necessary.

> Nice -10 X with SD is a far better solution than an ever increasing complexity 
> state machine and a userspace-changing scheduling policy just for X. Half 
> decent graphics cards get good interactivity with SD even without renicing.

For one, SD does not retain interactivity under any appreciable load;
and secondly, I'm getting interactivity that SD cannot even get close
to without renicing, and without any patches - in mainline right now.

(Speaking of low latency, how long can tasks forking off sleepers
whose wake times overlap prevent an array switch with SD?  Forever?)
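
In case the workload I'm describing isn't clear, here's a rough sketch
of such a chain of overlapping sleepers.  The timings (100ms sleeps,
50ms stagger, 20ms bursts) are arbitrary picks for illustration, not
numbers from any posted test case.

#include <sys/time.h>
#include <sys/wait.h>
#include <unistd.h>

/* spin for roughly ms milliseconds */
static void burn_ms(long ms)
{
	struct timeval start, now;

	gettimeofday(&start, NULL);
	do {
		gettimeofday(&now, NULL);
	} while ((now.tv_sec - start.tv_sec) * 1000L +
		 (now.tv_usec - start.tv_usec) / 1000L < ms);
}

int main(void)
{
	for (;;) {
		if (fork() == 0) {		/* child: sleep, burn, exit */
			usleep(100 * 1000);	/* sleep 100ms */
			burn_ms(20);		/* brief burst on wakeup */
			_exit(0);
		}
		usleep(50 * 1000);		/* stagger so sleeps overlap */
		while (waitpid(-1, NULL, WNOHANG) > 0)
			;			/* reap finished children */
	}
}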

I posted numbers that demonstrate the improvement in fairness while
maintaining interactivity, and I'm not finished.  I've solved the
multiple-fiftyp.c issue Ingo noticed; in fact, I had 10 copies running
that I had forgotten to terminate while I was working, and I didn't
even notice until I finished and saw my top window.  A patch will
follow as soon as I test some more (the testing is what takes the time,
not creating the diff; this isn't rocket science).

Maybe I'll succeed, maybe I won't.

	-Mike

