Message-ID: <20200829074719.GJ1362448@hirez.programming.kicks-ass.net>
Date: Sat, 29 Aug 2020 09:47:19 +0200
From: peterz@...radead.org
To: Vineeth Pillai <viremana@...ux.microsoft.com>
Cc: Julien Desfossez <jdesfossez@...italocean.com>,
Joel Fernandes <joelaf@...gle.com>,
Tim Chen <tim.c.chen@...ux.intel.com>,
Aaron Lu <aaron.lwe@...il.com>,
Aubrey Li <aubrey.intel@...il.com>,
Dhaval Giani <dhaval.giani@...cle.com>,
Chris Hyser <chris.hyser@...cle.com>,
Nishanth Aravamudan <naravamudan@...italocean.com>,
mingo@...nel.org, tglx@...utronix.de, pjt@...gle.com,
torvalds@...ux-foundation.org, linux-kernel@...r.kernel.org,
fweisbec@...il.com, keescook@...omium.org, kerrnel@...gle.com,
Phil Auld <pauld@...hat.com>,
Valentin Schneider <valentin.schneider@....com>,
Mel Gorman <mgorman@...hsingularity.net>,
Pawan Gupta <pawan.kumar.gupta@...ux.intel.com>,
Paolo Bonzini <pbonzini@...hat.com>, joel@...lfernandes.org,
vineeth@...byteword.org, Chen Yu <yu.c.chen@...el.com>,
Christian Brauner <christian.brauner@...ntu.com>,
Agata Gruza <agata.gruza@...el.com>,
Antonio Gomez Iglesias <antonio.gomez.iglesias@...el.com>,
graf@...zon.com, konrad.wilk@...cle.com, dfaggioli@...e.com,
rostedt@...dmis.org, derkling@...gle.com, benbjiang@...cent.com,
Vineeth Remanan Pillai <vpillai@...italocean.com>,
Aaron Lu <aaron.lu@...ux.alibaba.com>
Subject: Re: [RFC PATCH v7 08/23] sched: Add core wide task selection and
scheduling.
On Fri, Aug 28, 2020 at 06:02:25PM -0400, Vineeth Pillai wrote:
> On 8/28/20 4:51 PM, Peter Zijlstra wrote:
> > So where do things go side-ways?
> During hotplug stress testing, we have noticed that while a sibling is in
> pick_next_task, another sibling can go offline or come online. What
> we have observed is that smt_mask gets updated underneath us even if
> we hold the lock. From reading the code, it looks like we don't hold the
> rq lock when the mask is updated. This extra logic was to take care of that.
Sure, the mask is updated async, but _where_ is the actual problem with
that?
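
To make that concrete, this is roughly the loop in question -- a heavily
simplified sketch of the core-wide pick, not the actual patch; pick_task()
stands in for the per-class pick and the locking is elided:

static struct task_struct *pick_next_task_core_sketch(struct rq *rq)
{
        const struct cpumask *smt_mask = cpu_smt_mask(cpu_of(rq));
        struct task_struct *next = NULL;
        int i;

        /* all siblings' rq locks are assumed held at this point */
        for_each_cpu(i, smt_mask) {
                struct rq *rq_i = cpu_rq(i);

                /*
                 * If a sibling goes offline or comes online here,
                 * smt_mask can change under us; the question is whether
                 * that actually breaks anything, given the changed CPU
                 * isn't scheduling.
                 */
                rq_i->core_pick = pick_task(rq_i);
                if (rq_i == rq)
                        next = rq_i->core_pick;
        }

        return next;
}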
On Fri, Aug 28, 2020 at 06:23:55PM -0400, Joel Fernandes wrote:
> Thanks Vineeth. Peter, also the "v6+" series (which was some add-ons on v6)
> details the individual hotplug changes squashed into this patch:
> https://lore.kernel.org/lkml/20200815031908.1015049-9-joel@joelfernandes.org/
> https://lore.kernel.org/lkml/20200815031908.1015049-11-joel@joelfernandes.org/
That one looks fishy: the pick is core-wide, so making that pick_seq
per-rq just doesn't make sense.
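
Roughly, from memory of the series (field names as in v7, simplified),
the consumer side looks like the below; the pick itself is recorded in
the shared rq->core, and the sibling only tracks which core-wide pick
it has already acted on:

        /*
         * A core-wide pick exists that this sibling hasn't consumed
         * yet: use it instead of picking again.
         */
        if (rq->core->core_pick_seq == rq->core->core_task_seq &&
            rq->core->core_pick_seq != rq->core_sched_seq) {
                next = rq->core_pick;
                rq->core_sched_seq = rq->core->core_pick_seq;
                return next;
        }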
> https://lore.kernel.org/lkml/20200815031908.1015049-12-joel@joelfernandes.org/
This one reads like tinkering; there is no description of the actual
problem, just some code that makes a symptom go away.
Sure, on hotplug the smt mask can change, but only for a CPU that isn't
actually scheduling, so who cares.
/me re-reads the hotplug code...
..ooOO is the problem that we clear the cpumasks on take_cpu_down()
instead of play_dead() ?! That should be fixable.
> https://lore.kernel.org/lkml/20200815031908.1015049-13-joel@joelfernandes.org/
This is the only one that makes some sense; it makes rq->core consistent
over hotplug.
> Agreed, we can split the patches for the next series; however, for the final
> upstream merge, I suggest we fix the hotplug issues in this patch itself so
> that we don't break bisectability.
Meh, who sodding cares about hotplug :-). Also you can 'fix' such things
by making sure you can't actually enable core-sched until after
everything is in place.
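
Something like the below is all the gate you need (a sketch along the
lines of the static key the series already carries, names from memory):

/* core-sched is dead code until this key is flipped by the final patch */
DEFINE_STATIC_KEY_FALSE(__sched_core_enabled);

static inline bool sched_core_enabled(struct rq *rq)
{
        return static_branch_unlikely(&__sched_core_enabled) &&
               rq->core_enabled;
}

As long as nothing can enable that key until the hotplug pieces are in
place, the intermediate patches stay bisectable even if their hotplug
handling is incomplete.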