Message-ID: <jhj5ze9t0er.mognet@arm.com>
Date: Thu, 09 Apr 2020 11:32:12 +0100
From: Valentin Schneider <valentin.schneider@....com>
To: Sudeep Holla <sudeep.holla@....com>
Cc: Cheng Jian <cj.chengjian@...wei.com>, vpillai@...italocean.com,
aaron.lwe@...il.com, aubrey.intel@...il.com,
aubrey.li@...ux.intel.com, fweisbec@...il.com,
jdesfossez@...italocean.com, joel@...lfernandes.org,
joelaf@...gle.com, keescook@...omium.org, kerrnel@...gle.com,
linux-kernel@...r.kernel.org, mgorman@...hsingularity.net,
mingo@...nel.org, naravamudan@...italocean.com, pauld@...hat.com,
pawan.kumar.gupta@...ux.intel.com, pbonzini@...hat.com,
peterz@...radead.org, pjt@...gle.com, tglx@...utronix.de,
tim.c.chen@...ux.intel.com, torvalds@...ux-foundation.org,
xiexiuqi@...wei.com, huawei.libin@...wei.com, w.f@...wei.com,
linux-arm-kernel@...ts.infradead.org
Subject: Re: [PATCH] sched/arm64: store cpu topology before notify_cpu_starting
On 09/04/20 10:59, Sudeep Holla wrote:
> On Wed, Apr 01, 2020 at 02:23:33PM +0100, Valentin Schneider wrote:
>>
>> (+LAKML, +Sudeep)
>>
>
> Thanks Valentin.
>
>> On Wed, Apr 01 2020, Cheng Jian wrote:
>> > When SCHED_CORE is enabled, sched_cpu_starting() uses thread_sibling as
>> > the SMT mask to initialize rq->core, but thread_sibling is only ready for
>> > use after store_cpu_topology().
>> >
>> > notify_cpu_starting()
>> > -> sched_cpu_starting() # use thread_sibling
>> >
>> > store_cpu_topology(cpu)
>> > -> update_siblings_masks # set thread_sibling
>> >
>> > Fix this by calling notify_cpu_starting() later, just like x86 does.
>> >
>>
>> I haven't been following the sched core stuff closely; can't this
>> rq->core assignment be done in sched_cpu_activate() instead? We already
>> look at the cpu_smt_mask() in there, and it is valid (we go through the
>> entirety of secondary_start_kernel() before getting anywhere near
>> CPUHP_AP_ACTIVE).
>>
>
> I too came to the same conclusion. Did you see any issues? Or is it
> just code inspection in parity with x86?
>
With mainline this isn't a problem; with the core scheduling stuff there is
an expectation that we can use the SMT masks in sched_cpu_starting().
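
To spell out the ordering as I understand it (a rough sketch of the arm64
secondary bringup as described in the changelog, not the exact code):

  secondary_start_kernel()
    notify_cpu_starting()    # runs the CPUHP_AP_ONLINE_* callbacks,
                             # including sched_cpu_starting()
    store_cpu_topology()     # thread_sibling only becomes valid here
    set_cpu_online()
  ...
  sched_cpu_activate()       # CPUHP_AP_ACTIVE, reached only after
                             # secondary_start_kernel() has finished
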
>> I don't think this breaks anything, but without this dependency in
>> sched_cpu_starting() there isn't really a reason for this move.
>>
>
> Based on the commit message, I had a quick look at the x86 code and I agree
> this shouldn't break anything. However, the commit message doesn't make
> complete sense to me, especially the reference to sched_cpu_starting()
> while the SMT masks are accessed in sched_cpu_activate(). Or am I missing
> something here?
As stated above, it's not a problem for mainline, and AIUI we can change
the core scheduling bits to only use the SMT mask in sched_cpu_activate()
instead, which wouldn't require any change in the arch code.
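
Roughly what I have in mind, as a sketch only (sched_core_cpu_activate() is a
made-up name, not something from the posted series):

	int sched_cpu_activate(unsigned int cpu)
	{
	#ifdef CONFIG_SCHED_SMT
		/* Mainline already reads the SMT mask here... */
		if (cpumask_weight(cpu_smt_mask(cpu)) == 2)
			static_branch_inc_cpuslocked(&sched_smt_present);

		/*
		 * ...so the rq->core selection could be done at this point
		 * as well, once the siblings are known (hypothetical helper):
		 */
		sched_core_cpu_activate(cpu);
	#endif
		...
	}
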
I'm not aware of any written rule that the topology masks should be usable
from a given hotplug state upwards, only that right now we need them in
sched_cpu_(de)activate() for SMT scheduling - and that is already working
fine.
So really this should be considered a simple, neutral cleanup; I don't
really have any opinion on picking it up or not.