Message-ID: <20160428094548.GA23387@gmail.com>
Date:	Thu, 28 Apr 2016 11:45:48 +0200
From:	Ingo Molnar <mingo@...nel.org>
To:	Peter Zijlstra <peterz@...radead.org>
Cc:	Yuyang Du <yuyang.du@...el.com>, linux-kernel@...r.kernel.org,
	bsegall@...gle.com, pjt@...gle.com, morten.rasmussen@....com,
	vincent.guittot@...aro.org, dietmar.eggemann@....com,
	lizefan@...wei.com, umgwanakikbuti@...il.com
Subject: Re: [PATCH v3 6/6] sched/fair: Move (inactive) option from code to
 config


* Peter Zijlstra <peterz@...radead.org> wrote:

> On Tue, Apr 05, 2016 at 12:12:31PM +0800, Yuyang Du wrote:
> > The option of increased load resolution (fixed-point arithmetic range) is
> > unconditionally deactivated with #if 0 (sketched below). But since it may
> > still be in use somewhere (e.g., at Google), we want to keep this option.
> > 
> > Regardless, there should be a way to express this option. Given the
> > current circumstances, the compromise is to define a config option,
> > CONFIG_CFS_INCREASE_LOAD_RANGE, that depends on FAIR_GROUP_SCHED,
> > 64BIT, and BROKEN (a sketch follows below).
> > 
> > Suggested-by: Ingo Molnar <mingo@...nel.org>
> 
> So I'm very tempted to simply, unconditionally, reinstate this larger
> range for everything built with CONFIG_64BIT && CONFIG_FAIR_GROUP_SCHED
> (see the sketch below).
> 
> There was but a single claim of increased power usage, which nobody could
> reproduce or analyze, and Google has been running with this for years now.
> 
> Furthermore, the lack of it seems to be leading to the obvious problems
> on bigger machines, where we basically run out of precision due to the
> sheer number of CPUs (nr_cpus ~ SCHED_LOAD_SCALE, and stuff comes apart
> quickly; a worked example follows below).

Agreed.
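
For context, the #if 0 block in question lives in kernel/sched/sched.h;
in kernels of this era it looks roughly like the following (paraphrased,
not an exact quote):

	/* Increased fixed-point range for load values -- currently disabled */
	#if 0 /* BITS_PER_LONG > 32 -- increases power usage under light load */
	# define SCHED_LOAD_RESOLUTION	10
	# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
	# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
	#else
	# define SCHED_LOAD_RESOLUTION	0
	# define scale_load(w)		(w)
	# define scale_load_down(w)	(w)
	#endif

	#define SCHED_LOAD_SHIFT	(10 + SCHED_LOAD_RESOLUTION)
	#define SCHED_LOAD_SCALE	(1L << SCHED_LOAD_SHIFT)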
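A minimal sketch of the Kconfig entry the patch describes (only the
symbol name and the dependencies come from the changelog; the prompt
string here is invented for illustration):

	config CFS_INCREASE_LOAD_RANGE
		bool "Increase the fixed-point range of CFS load values"
		depends on FAIR_GROUP_SCHED && 64BIT && BROKEN
		default n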
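Unconditional reinstatement, as Peter suggests, would amount to turning
the #if 0 above into a compile-time check along these lines (a sketch,
not a committed patch):

	#if defined(CONFIG_64BIT) && defined(CONFIG_FAIR_GROUP_SCHED)
	# define SCHED_LOAD_RESOLUTION	10
	# define scale_load(w)		((w) << SCHED_LOAD_RESOLUTION)
	# define scale_load_down(w)	((w) >> SCHED_LOAD_RESOLUTION)
	#else
	# define SCHED_LOAD_RESOLUTION	0
	# define scale_load(w)		(w)
	# define scale_load_down(w)	(w)
	#endif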
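As for the precision problem: without the extra 10 bits, SCHED_LOAD_SCALE
is 1024, so a group carrying the default NICE_0 weight of 1024 that is
spread over ~1024 CPUs leaves about one unit of weight per CPU, and
integer truncation then erases any relative weighting. A toy user-space
illustration (plain C, not kernel code):

	#include <stdio.h>

	int main(void)
	{
		unsigned long shares = 1024;	/* NICE_0 weight at 10-bit resolution */
		int nr_cpus;

		/*
		 * Per-CPU share of the group's weight, truncated as in
		 * integer arithmetic: it drops to 1 around nr_cpus ~ 1024
		 * and to 0 beyond that, so weight ratios are lost.
		 */
		for (nr_cpus = 1; nr_cpus <= 4096; nr_cpus *= 4)
			printf("nr_cpus = %4d -> per-CPU share = %lu\n",
			       nr_cpus, shares / nr_cpus);
		return 0;
	}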

Thanks,

	Ingo
