Date: Wed, 23 Dec 2020 14:31:08 +0100
From: Vincent Guittot <vincent.guittot@...aro.org>
To: Peter Zijlstra <peterz@...radead.org>
Cc: Mel Gorman <mgorman@...hsingularity.net>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Aubrey Li <aubrey.li@...ux.intel.com>,
	Ingo Molnar <mingo@...hat.com>,
	Juri Lelli <juri.lelli@...hat.com>,
	Valentin Schneider <valentin.schneider@....com>,
	Qais Yousef <qais.yousef@....com>,
	Dietmar Eggemann <dietmar.eggemann@....com>,
	Steven Rostedt <rostedt@...dmis.org>,
	Ben Segall <bsegall@...gle.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>,
	Jiang Biao <benbjiang@...il.com>
Subject: Re: [RFC][PATCH 2/5] sched/fair: Make select_idle_cpu() proportional to cores

On Mon, 14 Dec 2020 at 18:07, Peter Zijlstra <peterz@...radead.org> wrote:
>
> Instead of calculating how many (logical) CPUs to scan, compute how
> many cores to scan.
>
> This changes behaviour for anything !SMT2.
>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@...radead.org>
> ---
>  kernel/sched/core.c |   19 ++++++++++++++-----
>  kernel/sched/fair.c |   12 ++++++++++--
>  2 files changed, 24 insertions(+), 7 deletions(-)
>
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7454,11 +7454,20 @@ int sched_cpu_activate(unsigned int cpu)
>  	balance_push_set(cpu, false);
>
>  #ifdef CONFIG_SCHED_SMT
> -	/*
> -	 * When going up, increment the number of cores with SMT present.
> -	 */
> -	if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
> -		static_branch_inc_cpuslocked(&sched_smt_present);
> +	do {
> +		int weight = cpumask_weight(cpu_smt_mask(cpu));
> +		extern int sched_smt_weight;
> +
> +		if (weight > sched_smt_weight)
> +			sched_smt_weight = weight;
> +
> +		/*
> +		 * When going up, increment the number of cores with SMT present.
> +		 */
> +		if (weight == 2)
> +			static_branch_inc_cpuslocked(&sched_smt_present);
> +
> +	} while (0);
>  #endif
>  	set_cpu_active(cpu, true);
>
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6010,6 +6010,8 @@ static inline int find_idlest_cpu(struct
>  DEFINE_STATIC_KEY_FALSE(sched_smt_present);
>  EXPORT_SYMBOL_GPL(sched_smt_present);
>
> +int sched_smt_weight = 1;
> +
>  static inline void set_idle_cores(int cpu, int val)
>  {
>  	struct sched_domain_shared *sds;
> @@ -6124,6 +6126,8 @@ static int select_idle_smt(struct task_s
>
>  #else /* CONFIG_SCHED_SMT */
>
> +#define sched_smt_weight	1
> +
>  static inline int select_idle_core(struct task_struct *p, struct sched_domain *sd, int target)
>  {
>  	return -1;
> @@ -6136,6 +6140,8 @@ static inline int select_idle_smt(struct
>
>  #endif /* CONFIG_SCHED_SMT */
>
> +#define sis_min_cores	2
> +
>  /*
>   * Scan the LLC domain for idle CPUs; this is dynamically regulated by
>   * comparing the average scan cost (tracked in sd->avg_scan_cost) against the
> @@ -6166,10 +6172,12 @@ static int select_idle_cpu(struct task_s
>  		avg_cost = this_sd->avg_scan_cost + 1;
>
>  		span_avg = sd->span_weight * avg_idle;
> -		if (span_avg > 4*avg_cost)
> +		if (span_avg > sis_min_cores * avg_cost)
>  			nr = div_u64(span_avg, avg_cost);
>  		else
> -			nr = 4;
> +			nr = sis_min_cores;
> +
> +		nr *= sched_smt_weight;

My comment applies to the final result of the patchset, where
select_idle_cpu() and select_idle_core() are merged.

IIUC, sched_smt_weight is the max number of CPUs per core, but we
already loop over all the CPUs of a core in one inner loop, so we
don't need to loop more than the number of cores.

>
>  		time = cpu_clock(this);
>  	}
>
>
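
A standalone toy model of the scan-budget accounting discussed above (this
is illustrative C, not the kernel code: scan_llc(), SMT_WEIGHT and the
numbers are made up, and it assumes the merged loop steps once per core
while an inner walk examines every SMT sibling of that core):

#include <stdio.h>

#define SMT_WEIGHT	2	/* logical CPUs (SMT siblings) per core */

/* One loop iteration consumes a whole core: the inner per-sibling
 * walk is folded into the single increment below. */
static int scan_llc(int nr, int ncores)
{
	int core, scanned_cpus = 0;

	for (core = 0; core < ncores; core++) {
		scanned_cpus += SMT_WEIGHT;	/* sibling walk of this core */
		if (--nr <= 0)
			break;
	}
	return scanned_cpus;
}

int main(void)
{
	int nr = 4;	/* scan budget, counted in cores */

	/* A budget of 4 cores already examines 8 CPUs on SMT2. */
	printf("nr=%d -> %d CPUs examined\n", nr, scan_llc(nr, 16));

	/* Scaling the budget by sched_smt_weight on top of the per-core
	 * loop doubles the cores scanned, which is the objection above. */
	printf("nr=%d -> %d CPUs examined\n",
	       nr * SMT_WEIGHT, scan_llc(nr * SMT_WEIGHT, 16));
	return 0;
}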