Message-ID: <20070418095334.GA26525@elte.hu>
Date:	Wed, 18 Apr 2007 11:53:34 +0200
From:	Ingo Molnar <mingo@...e.hu>
To:	Nick Piggin <npiggin@...e.de>
Cc:	Andy Whitcroft <apw@...dowen.org>, linux-kernel@...r.kernel.org,
	Linus Torvalds <torvalds@...ux-foundation.org>,
	Andrew Morton <akpm@...ux-foundation.org>,
	Con Kolivas <kernel@...ivas.org>,
	Mike Galbraith <efault@....de>,
	Arjan van de Ven <arjan@...radead.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Steve Fox <drfickle@...ibm.com>,
	Nishanth Aravamudan <nacc@...ibm.com>
Subject: Re: [Announce] [patch] Modular Scheduler Core and Completely Fair Scheduler [CFS]


* Nick Piggin <npiggin@...e.de> wrote:

> > > 535.43user 30.62system 2:23.72elapsed 393%CPU
> > 
> > Thanks for testing this! Could you please try this also with:
> > 
> >    echo 100000000 > /proc/sys/kernel/sched_granularity
> 
> 507.68user 31.87system 2:18.05elapsed 390%CPU
> 507.99user 31.93system 2:18.09elapsed 390%CPU

> > could you maybe even try a more extreme setting of:
> > 
> >    echo 500000000 > /proc/sys/kernel/sched_granularity

> 506.69user 31.96system 2:17.82elapsed 390%CPU
> 505.70user 31.84system 2:17.90elapsed 389%CPU

> Again, for comparison 2.6.21-rc7 mainline:
> 
> 508.87user 32.47system 2:17.82elapsed 392%CPU
> 509.05user 32.25system 2:17.84elapsed 392%CPU

thanks for testing this!

> So looking at elapsed time, a granularity of 100ms is just behind the 
> mainline score. However it is using slightly less user time and 
> slightly more idle time, which indicates that balancing might have got 
> a bit less aggressive.
> 
> But anyway, it conclusively shows the efficiency impact of such tiny 
> timeslices.

yeah, a ~4% drop (2:23.7 vs. 2:17.8 elapsed) in a CPU-cache-sensitive 
workload like kernbench is not unexpected when preemption gets this 
frequent. Clearly, the default preemption granularity needs to be 
tuned up.
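
(side note, for anyone replicating the tuning: the knob takes 
nanoseconds, and - assuming the usual proc sysctl read/write 
semantics - the kernel's current default can be read back via:

    cat /proc/sys/kernel/sched_granularity

so the 100000000 above really means a 100 msec granularity.)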

I think you said you measured an average of ~3 msec between 
preemptions per CPU? That would put the average cache-thrashing cost 
at ~120 usecs per 3 msec window (i.e. 4% of 3 msec). Taking that as a 
ballpark figure, to get the difference back into the noise range we'd 
have to either use ~5 msec:

    echo 5000000 > /proc/sys/kernel/sched_granularity

or 15 msec:

    echo 15000000 > /proc/sys/kernel/sched_granularity

(depending on whether the right base is 5x your ~1 msec or 5x your 
~3 msec figure - i'm still not sure i correctly understood your 3msec 
value. I'd have to know your kernbench workload's approximate 'steady 
state' context-switch rate to do a more accurate calculation.)
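
(back-of-the-envelope, to make the dilution explicit - assuming the 
~120 usecs of cache-refill cost per preemption stays roughly constant 
as the window grows:

    120 usec /  3000 usec  ~ 4.0%   (the measured drop)
    120 usec / 15000 usec  ~ 0.8%   (5x fewer preemptions - in the noise)

and the steady-state context-switch rate is easy to sample during a 
kernbench run, e.g. via the 'cs' column of:

    vmstat 1

or by diffing the 'ctxt' line of /proc/stat before and after the run.)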

	Ingo
