Date: Fri, 28 Aug 2020 22:51:54 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: Julien Desfossez <jdesfossez@...italocean.com>
Cc: Vineeth Pillai <viremana@...ux.microsoft.com>, Joel Fernandes <joelaf@...gle.com>,
	Tim Chen <tim.c.chen@...ux.intel.com>, Aaron Lu <aaron.lwe@...il.com>,
	Aubrey Li <aubrey.intel@...il.com>, Dhaval Giani <dhaval.giani@...cle.com>,
	Chris Hyser <chris.hyser@...cle.com>, Nishanth Aravamudan <naravamudan@...italocean.com>,
	mingo@...nel.org, tglx@...utronix.de, pjt@...gle.com, torvalds@...ux-foundation.org,
	linux-kernel@...r.kernel.org, fweisbec@...il.com, keescook@...omium.org,
	kerrnel@...gle.com, Phil Auld <pauld@...hat.com>,
	Valentin Schneider <valentin.schneider@....com>, Mel Gorman <mgorman@...hsingularity.net>,
	Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>, Paolo Bonzini <pbonzini@...hat.com>,
	joel@...lfernandes.org, vineeth@...byteword.org, Chen Yu <yu.c.chen@...el.com>,
	Christian Brauner <christian.brauner@...ntu.com>, Agata Gruza <agata.gruza@...el.com>,
	Antonio Gomez Iglesias <antonio.gomez.iglesias@...el.com>, graf@...zon.com,
	konrad.wilk@...cle.com, dfaggioli@...e.com, rostedt@...dmis.org, derkling@...gle.com,
	benbjiang@...cent.com, Vineeth Remanan Pillai <vpillai@...italocean.com>,
	Aaron Lu <aaron.lu@...ux.alibaba.com>
Subject: Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and scheduling.

On Fri, Aug 28, 2020 at 03:51:09PM -0400, Julien Desfossez wrote:

> +	smt_weight = cpumask_weight(smt_mask);
> +	for_each_cpu_wrap_or(i, smt_mask, cpumask_of(cpu), cpu) {
> +		struct rq *rq_i = cpu_rq(i);
> +		struct task_struct *p;
> +
> +		/*
> +		 * During hotplug online a sibling can be added in
> +		 * the smt_mask while we are here. If so, we would
> +		 * need to restart selection by resetting all over.
> +		 */
> +		if (unlikely(smt_weight != cpumask_weight(smt_mask)))
> +			goto retry_select;

cpumask_weight() is fairly expensive, esp. for something that should 'never' happen.

What exactly is the race here?
We'll update the cpu_smt_mask() fairly early in secondary bringup, but where does it become a problem? The moment the new thread starts scheduling it'll block on the common rq->lock and then it'll cycle task_seq and do a new pick. So where do things go side-ways? Can we please split out this hotplug 'fix' into a separate patch with a coherent changelog.
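To make the cost concern above concrete: cpumask_weight() is a population count over every word of the CPU bitmap, so recomputing it on each pass through the pick loop does O(nr_cpu_ids / BITS_PER_LONG) work to guard a case that should 'never' fire, whereas the generation-counter path the reply points at (the onlining sibling must take the common rq->lock, which cycles task_seq) detects staleness with a single compare. A minimal userspace sketch of the two checks, with all names (`model_*`) hypothetical rather than actual kernel code:

```c
#include <limits.h>
#include <stddef.h>

/* Model a cpumask the way the kernel does: an array of unsigned longs. */
#define MODEL_NR_CPUS 256
#define BITS_PER_WORD (sizeof(unsigned long) * CHAR_BIT)
#define NR_WORDS      ((MODEL_NR_CPUS + BITS_PER_WORD - 1) / BITS_PER_WORD)

struct model_cpumask {
	unsigned long bits[NR_WORDS];
};

/*
 * cpumask_weight() boils down to a popcount over the whole bitmap:
 * O(NR_WORDS) work on every invocation, which is what makes it an
 * expensive thing to run once per pick-loop iteration.
 */
static unsigned int model_cpumask_weight(const struct model_cpumask *m)
{
	unsigned int w = 0;

	for (size_t i = 0; i < NR_WORDS; i++)
		w += (unsigned int)__builtin_popcountl(m->bits[i]);
	return w;
}

/*
 * The alternative implied by the reply: a newly-onlined sibling blocks
 * on the common rq->lock before it can schedule, and bumps a sequence
 * counter there, so the picker only needs one comparison to know its
 * selection is stale and must be redone.
 */
struct model_core {
	unsigned long task_seq;	/* bumped under the core-wide lock */
};

static int model_pick_is_stale(const struct model_core *c, unsigned long seen)
{
	return c->task_seq != seen;	/* O(1) instead of a full recount */
}
```

The sketch only contrasts the two staleness checks; it models none of the locking itself, which is exactly the part the reply asks to be explained in a separate patch with its own changelog.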