Date: Tue, 25 Aug 2020 17:27:42 +0800
From: xunlei <xlpang@...ux.alibaba.com>
To: Jiang Biao <benbjiang@...il.com>
Cc: Ingo Molnar <mingo@...hat.com>, Peter Zijlstra <peterz@...radead.org>,
	Vincent Guittot <vincent.guittot@...aro.org>, Juri Lelli <juri.lelli@...hat.com>,
	Wetp Zhang <wetp.zy@...ux.alibaba.com>,
	linux-kernel <linux-kernel@...r.kernel.org>
Subject: Re: [PATCH] sched/fair: Fix wrong cpu selecting from isolated domain

On 2020/8/25 2:37 PM, Jiang Biao wrote:
> On Mon, 24 Aug 2020 at 20:31, Xunlei Pang <xlpang@...ux.alibaba.com> wrote:
>>
>> We've hit problems in production where tasks with a full cpumask
>> (e.g. after being put into a cpuset or having their affinity set to
>> all CPUs) were occasionally migrated to our isolated cpus.
>>
>> After some analysis, we found that it is due to the current
>> select_idle_smt() not considering the sched_domain mask.
>>
>> Fix it by checking the valid domain mask in select_idle_smt().
>>
>> Fixes: 10e2f1acd010 ("sched/core: Rewrite and improve select_idle_siblings()")
>> Reported-by: Wetp Zhang <wetp.zy@...ux.alibaba.com>
>> Signed-off-by: Xunlei Pang <xlpang@...ux.alibaba.com>
>> ---
>>  kernel/sched/fair.c | 9 +++++----
>>  1 file changed, 5 insertions(+), 4 deletions(-)
>>
>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>> index 1a68a05..fa942c4 100644
>> --- a/kernel/sched/fair.c
>> +++ b/kernel/sched/fair.c
>> @@ -6075,7 +6075,7 @@ static int select_idle_core(struct task_struct *p, struct sched_domain *sd, int
>>  /*
>>   * Scan the local SMT mask for idle CPUs.
>>   */
>> -static int select_idle_smt(struct task_struct *p, int target)
>> +static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
>>  {
>>         int cpu;
>>
>> @@ -6083,7 +6083,8 @@ static int select_idle_smt(struct task_struct *p, int target)
>>                 return -1;
>>
>>         for_each_cpu(cpu, cpu_smt_mask(target)) {
>> -               if (!cpumask_test_cpu(cpu, p->cpus_ptr))
>> +               if (!cpumask_test_cpu(cpu, p->cpus_ptr) ||
>> +                   !cpumask_test_cpu(cpu, sched_domain_span(sd)))
> Maybe the following change could be better, :)
>     for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd))
> It keeps a similar style to select_idle_core()/select_idle_cpu(), and
> could reduce the loop iterations.
>
I thought about that, but given that the smt mask is usually small, the
original code may run a bit faster?

> Just an option.
> Reviewed-by: Jiang Biao <benbjiang@...cent.com>
>
Thanks :-)
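For reference, here is a minimal sketch of what Jiang Biao's for_each_cpu_and()
suggestion would look like when applied to select_idle_smt(). The loop body
(the available_idle_cpu()/sched_idle_cpu() test) is not quoted in the mail
above and is assumed from the kernel/sched/fair.c code of that period, so
treat this as an illustration rather than the posted patch:

/*
 * Illustrative sketch, not the posted patch: fold the sched_domain
 * check into the iterator, as suggested in the review above.  The
 * idle-CPU test in the loop body is assumed from nearby fair.c code.
 */
static int select_idle_smt(struct task_struct *p, struct sched_domain *sd, int target)
{
	int cpu;

	if (!static_branch_likely(&sched_smt_present))
		return -1;

	/* Walk only the SMT siblings that also lie inside this sched_domain. */
	for_each_cpu_and(cpu, cpu_smt_mask(target), sched_domain_span(sd)) {
		if (!cpumask_test_cpu(cpu, p->cpus_ptr))
			continue;
		if (available_idle_cpu(cpu) || sched_idle_cpu(cpu))
			return cpu;
	}

	return -1;
}

The trade-off discussed in the thread: for_each_cpu_and() computes the
intersection as it iterates and matches the style of select_idle_core() and
select_idle_cpu(), while the posted two-test version may be marginally cheaper
when the SMT mask only holds a couple of siblings.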