Message-ID: <28cacb73-73d7-778a-24ca-9053702f6af7@bytedance.com>
Date: Tue, 2 May 2023 18:22:56 +0800
From: Hao Jia <jiahao.os@...edance.com>
To: Vincent Guittot <vincent.guittot@...aro.org>
Cc: mingo@...hat.com, peterz@...radead.org, mingo@...nel.org,
juri.lelli@...hat.com, dietmar.eggemann@....com,
rostedt@...dmis.org, bsegall@...gle.com, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com,
mgorman@...hsingularity.net, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH 2/2] sched/core: Avoid double calling
update_rq_clock()
On 2023/4/21 Vincent Guittot wrote:
> On Mon, 10 Apr 2023 at 10:12, Hao Jia <jiahao.os@...edance.com> wrote:
>>
>> Several double rq clock update warnings are triggered:
>> ------------[ cut here ]------------
>> rq->clock_update_flags & RQCF_UPDATED
>> WARNING: CPU: 17 PID: 138 at kernel/sched/core.c:741
>> update_rq_clock+0xaf/0x180
>> Call Trace:
>> <TASK>
>> __balance_push_cpu_stop+0x146/0x180
>> ? migration_cpu_stop+0x2a0/0x2a0
>> cpu_stopper_thread+0xa3/0x140
>> smpboot_thread_fn+0x14f/0x210
>> ? sort_range+0x20/0x20
>> kthread+0xe6/0x110
>> ? kthread_complete_and_exit+0x20/0x20
>> ret_from_fork+0x1f/0x30
>>
>> ------------[ cut here ]------------
>> rq->clock_update_flags & RQCF_UPDATED
>> WARNING: CPU: 54 PID: 0 at kernel/sched/core.c:741
>> update_rq_clock+0xaf/0x180
>> Call Trace:
>> <TASK>
>> unthrottle_cfs_rq+0x4b/0x300
>> __cfsb_csd_unthrottle+0xe0/0x100
>> __flush_smp_call_function_queue+0xaf/0x1d0
>> flush_smp_call_function_queue+0x49/0x90
>> do_idle+0x17c/0x270
>> cpu_startup_entry+0x19/0x20
>> start_secondary+0xfa/0x120
>> secondary_startup_64_no_verify+0xce/0xdb
>>
>> ------------[ cut here ]------------
>> rq->clock_update_flags & RQCF_UPDATED
>> WARNING: CPU: 0 PID: 3323 at kernel/sched/core.c:741
>> update_rq_clock+0xaf/0x180
>> Call Trace:
>> <TASK>
>> unthrottle_cfs_rq+0x4b/0x300
>> rq_offline_fair+0x89/0x90
>> set_rq_offline.part.118+0x28/0x60
>
> So this is generated by patch 1, isn't it ?
Sorry for the late reply, I just got back from a long leave.
IIRC, this is not generated by patch 1.
In the unthrottle_offline_cfs_rqs() function, we traverse task_groups
through list_for_each_entry_rcu(), so unthrottle_cfs_rq() may be called
multiple times, resulting in multiple updates to the rq clock.
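
For reference, the loop in question looks roughly like this (a
simplified sketch of kernel/sched/fair.c, details trimmed):

	static void unthrottle_offline_cfs_rqs(struct rq *rq)
	{
		struct task_group *tg;

		lockdep_assert_rq_held(rq);

		rcu_read_lock();
		list_for_each_entry_rcu(tg, &task_groups, list) {
			struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

			if (!cfs_rq->runtime_enabled)
				continue;

			/*
			 * Each iteration may call unthrottle_cfs_rq().
			 * If unthrottle_cfs_rq() itself does
			 * update_rq_clock(rq), every call after the
			 * first one sees RQCF_UPDATED still set and
			 * trips the warning above.
			 */
			if (cfs_rq_throttled(cfs_rq))
				unthrottle_cfs_rq(cfs_rq);
		}
		rcu_read_unlock();
	}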
Thanks,
Hao
>
>> rq_attach_root+0xc4/0xd0
>> cpu_attach_domain+0x3dc/0x7f0
>> partition_sched_domains_locked+0x2a5/0x3c0
>> rebuild_sched_domains_locked+0x477/0x830
>> rebuild_sched_domains+0x1b/0x30
>> cpuset_hotplug_workfn+0x2ca/0xc90
>> ? balance_push+0x56/0xf0
>> ? _raw_spin_unlock+0x15/0x30
>> ? finish_task_switch+0x98/0x2f0
>> ? __switch_to+0x291/0x410
>> ? __schedule+0x65e/0x1310
>> process_one_work+0x1bc/0x3d0
>> worker_thread+0x4c/0x380
>> ? preempt_count_add+0x92/0xa0
>> ? rescuer_thread+0x310/0x310
>> kthread+0xe6/0x110
>> ? kthread_complete_and_exit+0x20/0x20
>> ret_from_fork+0x1f/0x30
>>
>> For the __balance_push_cpu_stop() case, we remove update_rq_clock()
>> from the __migrate_task() function to avoid updating the rq clock
>> twice. And to avoid missing an rq clock update, we add an
>> update_rq_clock() call before migration_cpu_stop() calls
>> __migrate_task().
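
In code, the change described above has roughly this shape (a
simplified sketch, not the exact hunks):

	/* kernel/sched/core.c */
	static struct rq *__migrate_task(struct rq *rq, struct rq_flags *rf,
					 struct task_struct *p, int dest_cpu)
	{
		/* Affinity changed (again). */
		if (!is_cpu_allowed(p, dest_cpu))
			return rq;

	-	update_rq_clock(rq);	/* removed: the caller updates the clock */
		rq = move_queued_task(rq, rf, p, dest_cpu);

		return rq;
	}

	/* ... and in migration_cpu_stop(), before the call: */
	+	update_rq_clock(rq);
		rq = __migrate_task(rq, &rf, p, dest_cpu);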
>>
>> This also works for unthrottle_cfs_rq(), so we also remove
>> update_rq_clock() from the unthrottle_cfs_rq() function to avoid
>> the warnings caused by callers that invoke it multiple times, such
>> as __cfsb_csd_unthrottle() and unthrottle_offline_cfs_rqs(). And to
>> avoid missing an rq clock update, we correspondingly add
>> update_rq_clock() calls before unthrottle_cfs_rq() runs.
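
For example, with __cfsb_csd_unthrottle() that ends up looking
roughly like this (simplified, RCU annotations omitted):

	static void __cfsb_csd_unthrottle(void *arg)
	{
		struct cfs_rq *cursor, *tmp;
		struct rq *rq = arg;
		struct rq_flags rf;

		rq_lock(rq, &rf);

		/* Update the clock once for the whole batch ... */
		update_rq_clock(rq);

		list_for_each_entry_safe(cursor, tmp, &rq->cfsb_csd_list,
					 throttled_csd_list) {
			list_del_init(&cursor->throttled_csd_list);

			/* ... so unthrottle_cfs_rq() need not do it. */
			if (cfs_rq_throttled(cursor))
				unthrottle_cfs_rq(cursor);
		}

		rq_unlock(rq, &rf);
	}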
>>
>> Note that the rq clock has already been updated before the
>> set_rq_offline() function runs, so we don't need to add an
>> update_rq_clock() call in unthrottle_offline_cfs_rqs().
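
The caller-side pattern that guarantees this is roughly the
following (a sketch assuming patch 1 of this series, e.g. in
rq_attach_root()):

	rq_lock_irqsave(rq, &rf);
	/* Clock is updated before taking the rq offline ... */
	update_rq_clock(rq);

	if (rq->rd && cpumask_test_cpu(rq->cpu, rq->rd->online))
		/*
		 * ... so rq_offline_fair() -> unthrottle_offline_cfs_rqs()
		 * already sees a fresh clock.
		 */
		set_rq_offline(rq);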
>>