Message-ID: <20150714020721.GA3956@byungchulpark-X58A-UD3R>
Date:	Tue, 14 Jul 2015 11:07:21 +0900
From:	Byungchul Park <byungchul.park@....com>
To:	Mike Galbraith <umgwanakikbuti@...il.com>
Cc:	Morten Rasmussen <morten.rasmussen@....com>, mingo@...nel.org,
	peterz@...radead.org, linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] sched: let __sched_period() use rq's nr_running

On Mon, Jul 13, 2015 at 02:30:38PM +0200, Mike Galbraith wrote:
> On Mon, 2015-07-13 at 20:07 +0900, Byungchul Park wrote:
> 
> > i still think stretching with local cfs's nr_running should be replaced with
> > stretching with a top(=root) level one.
> 
> I think we just can't take 'slice' _too_ seriously.  Not only is it

hello mike, :)

as you said, it is not something that has to be taken too seriously,
since it would be adjusted by vruntime in cfs anyway.

but.. is there any reason meaningless code should be kept in the source? :(
it also harms readability. of course, i need to modify my patch a little
bit to prevent non-group sched entities from getting a large slice.

thank you,
byungchul
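
(For context, a minimal userspace sketch of the period calculation under
discussion, not the kernel code itself. The constants, 6ms sched_latency,
0.75ms min granularity and a sched_nr_latency of 8, are assumed here as
the usual defaults of that era; the only point is to show how the period
stretches with nr_running, and how a local cfs_rq count and the root rq
count can give quite different numbers.)

/*
 * Toy model, roughly the shape of __sched_period(): stretch the latency
 * target once nr_running exceeds sched_nr_latency, otherwise keep it
 * fixed. The defaults below are assumptions, not read from a kernel.
 */
#include <stdio.h>

#define NSEC_PER_MSEC 1000000ULL

static const unsigned long long sched_latency   = 6 * NSEC_PER_MSEC;
static const unsigned long long min_granularity = 750000ULL; /* 0.75ms */
static const unsigned long      nr_latency      = 8;

static unsigned long long sched_period(unsigned long nr_running)
{
	if (nr_running > nr_latency)
		return nr_running * min_granularity;
	return sched_latency;
}

int main(void)
{
	unsigned long local_nr = 2;   /* runnable tasks on one group's cfs_rq */
	unsigned long root_nr  = 20;  /* all runnable tasks on the CPU */

	printf("period(local nr_running=%lu) = %llu ns\n",
	       local_nr, sched_period(local_nr));
	printf("period(root  nr_running=%lu) = %llu ns\n",
	       root_nr, sched_period(root_nr));
	return 0;
}

With two tasks on the local cfs_rq the period stays at the 6ms latency
target, while a root-level count of 20 stretches it to 15ms, which is
roughly the difference the patch in the subject line is about.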

> annoying with cgroups, the scheduler simply doesn't deliver 'slices' in
> the traditional sense, it equalizes vruntimes, planning to do that at
> slice granularity.  FAIR_SLEEPERS doesn't make that planning any easier.
> With a pure compute load and no HR_TICK, what you get is tick
> granularity preemption checkpoints, but having just chewed up a 'slice'
> means nothing if you're still leftmost.  It's all about vruntime, so
> leftmost can have back to back 'slices'.  FAIR_SLEEPERS just increases
> the odds that leftmost WILL take more than one 'slice'.
> 
> (we could perhaps decay deficit after a full slice or such to decrease
> the spread growth that sleepers induce. annoying problem, especially so
> with a gaggle of identical sleepers, as sleep time becomes meaningless,
> there is no differential to equalize.. other than the ones we create..
> but I'm digressing, a lot, time to stop thinking/typing, go do work;)
> 
> 	-Mike
> 
> 
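
(To make the "it's all about vruntime" point above concrete: a toy
userspace model, with made-up names and constants rather than anything
taken from the kernel. Selection is purely by minimum vruntime, so an
entity that wakes with a vruntime deficit keeps getting picked tick
after tick, back to back, until it catches up, no matter how much
"slice" it has already consumed.)

#include <stdio.h>

#define TICK_NS  1000000ULL   /* 1ms tick */
#define SLICE_NS 3000000ULL   /* nominal 3ms "slice", for illustration only */

struct entity {
	const char *name;
	unsigned long long vruntime;
	unsigned long long ran;
};

int main(void)
{
	/* B just woke up and was placed 6ms behind A (sleeper credit). */
	struct entity a = { "A", 12000000ULL, 0 };
	struct entity b = { "B",  6000000ULL, 0 };

	for (int tick = 0; tick < 10; tick++) {
		/* pick leftmost: smallest vruntime wins, nothing else matters */
		struct entity *curr = (a.vruntime <= b.vruntime) ? &a : &b;

		curr->vruntime += TICK_NS;  /* equal weights: charge 1:1 */
		curr->ran      += TICK_NS;

		printf("tick %2d: ran %s (vruntime A=%llu B=%llu)%s\n",
		       tick, curr->name, a.vruntime, b.vruntime,
		       (curr == &b && curr->ran > SLICE_NS) ?
		       "  <- past a full slice, still leftmost" : "");
	}
	return 0;
}

B runs the first six ticks back to back; per-slice accounting never
enters the picture, only the vruntime gap does.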
