Date: Tue, 12 Aug 2014 09:29:19 +0530
From: Preeti U Murthy <preeti@...ux.vnet.ibm.com>
To: Peter Zijlstra <peterz@...radead.org>, Fengguang Wu <fengguang.wu@...el.com>
CC: Vincent Guittot <vincent.guittot@...aro.org>, Dave Hansen <dave.hansen@...el.com>,
    LKML <linux-kernel@...r.kernel.org>, lkp@...org, Ingo Molnar <mingo@...nel.org>,
    Dietmar Eggemann <dietmar.eggemann@....com>
Subject: Re: [sched] 143e1e28cb4: +17.9% aim7.jobs-per-min, -9.7% hackbench.throughput

On 08/11/2014 07:03 PM, Peter Zijlstra wrote:
>
> Now I think I see why this is; we've reduced load balancing frequency
> significantly on this machine due to:

We have also changed the value of busy_factor from 64 to 32 across all
domains. Wouldn't this contribute to an increased frequency of load
balancing?

Regards
Preeti U Murthy

> -#define SD_SIBLING_INIT (struct sched_domain) {		\
> -	.min_interval		= 1,				\
> -	.max_interval		= 2,				\
>
> -#define SD_MC_INIT (struct sched_domain) {			\
> -	.min_interval		= 1,				\
> -	.max_interval		= 4,				\
>
> -#define SD_CPU_INIT (struct sched_domain) {			\
> -	.min_interval		= 1,				\
> -	.max_interval		= 4,				\
>
> 	*sd = (struct sched_domain){
> 		.min_interval		= sd_weight,
> 		.max_interval		= 2*sd_weight,
>
> Which both increased the min and max value significantly for all domains
> involved.
>
> That said; I think we might want to do something like the below; I can
> imagine decreasing load balancing too much will negatively impact other
> workloads.
>
> Maybe slightly modified to make sure the first domain has a min_interval
> of 1.
>
> ---
>  kernel/sched/core.c | 6 +++---
>  1 file changed, 3 insertions(+), 3 deletions(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 1211575a2208..67ed5d854da1 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6049,8 +6049,8 @@ sd_init(struct sched_domain_topology_level *tl, int cpu)
> 	sd_flags &= ~TOPOLOGY_SD_FLAGS;
>
> 	*sd = (struct sched_domain){
> -		.min_interval		= sd_weight,
> -		.max_interval		= 2*sd_weight,
> +		.min_interval		= max(1, sd_weight/2),
> +		.max_interval		= sd_weight,
> 		.busy_factor		= 32,
> 		.imbalance_pct		= 125,
>
> @@ -6076,7 +6076,7 @@ sd_init(struct sched_domain_topology_level *tl, int cpu)
> 		,
>
> 		.last_balance		= jiffies,
> -		.balance_interval	= sd_weight,
> +		.balance_interval	= max(1, sd_weight/2),
> 		.smt_gain		= 0,
> 		.max_newidle_lb_cost	= 0,
> 		.next_decay_max_lb_cost	= jiffies,
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/