Date:	Mon, 23 Apr 2007 06:06:00 +0200
From:	Nick Piggin <npiggin@...e.de>
To:	Ingo Molnar <mingo@...e.hu>
Cc:	linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Con Kolivas <kernel@...ivas.org>,
	Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Peter Williams <pwil3058@...pond.net.au>,
	Thomas Gleixner <tglx@...utronix.de>, caglar@...dus.org.tr,
	Willy Tarreau <w@....eu>,
	Gene Heskett <gene.heskett@...il.com>, Mark Lord <lkml@....ca>,
	Ulrich Drepper <drepper@...hat.com>
Subject: Re: [patch] CFS scheduler, -v5

On Mon, Apr 23, 2007 at 05:43:10AM +0200, Ingo Molnar wrote:
> 
> * Nick Piggin <npiggin@...e.de> wrote:
> 
> > > note that CFS's "granularity" value is not directly comparable to 
> > > "timeslice length":
> > 
> > Right, but it does introduce the kbuild regression, [...]
> 
> Note that i increased the granularity from 1msec to 5msecs after your 
> kbuild report, could you perhaps retest kbuild with the default settings 
> of -v5?

I'm looking at mysql again today, but I will try eventually. It was
just a simple kbuild.


> > [...] and as we discussed, this will be only worse on newer CPUs with 
> > bigger caches or less naturally context switchy workloads.
> 
> yeah - but they'll all be quad core, so the SMP timeslice multiplicator 
> should do the trick. Most of the CFS testers use single-CPU systems.

But desktop users could have quad-thread and even 8-thread CPUs
soon, so if the number doesn't work for both then you're in trouble.
It just smells like a hack to scale with CPU numbers.

 
> > > (in -v6 i'll scale the granularity up a bit with the number of CPUs, 
> > > like SD does. That should get the right result on larger SMP boxes 
> > > too.)
> > 
> > I don't really like the scaling with SMP thing. The cache effects are 
> > still going to be significant on small systems, and there are lots of 
> > non-desktop users of those (eg. clusters).
> 
> CFS using clusters will want to tune the granularity up drastically 
> anyway, to 1 second or more, to maximize throughput. I think a small 
> default with a scale-up-on-SMP rule is pretty sane. We'll gather some 
> more kbuild data and see what happens, ok?
> 
> > > while i agree it's a tad too finegrained still, I agree with Con's 
> > > choice: rather err on the side of being too finegrained and lose 
> > > some small amount of throughput on cache-intense workloads like 
> > > compile jobs, than err on the side of being visibly too choppy for 
> > > users on the desktop.
> > 
> > So cfs gets too choppy if you make the effective timeslice comparable 
> > to mainline?
> 
> it doesnt in any test i do, but again, i'm erring on the side of it 
> being more interactive.

I'd start by erring on the side of trying to ensure no obvious
performance regressions like this, because that's the easy part. Suppose
everybody finds your scheduler wonderfully interactive, but it turns
out you can't keep it that way with a larger timeslice?

For _real_ desktop systems, sure, erring on the side of being more
interactive is fine. For RFC patches for testing, I really think you
could be taking advantage of the fact that people will give you feedback
on the issue.

