Message-ID: <xm26jzwc9z0s.fsf@google.com>
Date: Fri, 09 Jun 2023 13:11:31 -0700
From: Benjamin Segall <bsegall@...gle.com>
To: Hao Jia <jiahao.os@...edance.com>
Cc: mingo@...hat.com, peterz@...radead.org, mingo@...nel.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
dietmar.eggemann@....com, rostedt@...dmis.org, mgorman@...e.de,
bristot@...hat.com, vschneid@...hat.com,
mgorman@...hsingularity.net, linux-kernel@...r.kernel.org
Subject: Re: [External] Re: [PATCH v4 4/4] sched/core: Avoid multiple
 calling update_rq_clock() in unthrottle_offline_cfs_rqs()

Hao Jia <jiahao.os@...edance.com> writes:

> On 2023/6/9 Benjamin Segall wrote:
>> Hao Jia <jiahao.os@...edance.com> writes:
>>
>>> The following WARN_DOUBLE_CLOCK warning is triggered during CPU offline:
>>> ------------[ cut here ]------------
>>> rq->clock_update_flags & RQCF_UPDATED
>>> WARNING: CPU: 0 PID: 3323 at kernel/sched/core.c:741
>>> update_rq_clock+0xaf/0x180
>>> Call Trace:
>>> <TASK>
>>> unthrottle_cfs_rq+0x4b/0x300
>>> rq_offline_fair+0x89/0x90
>>> set_rq_offline.part.118+0x28/0x60
>>> rq_attach_root+0xc4/0xd0
>>> cpu_attach_domain+0x3dc/0x7f0
>>> partition_sched_domains_locked+0x2a5/0x3c0
>>> rebuild_sched_domains_locked+0x477/0x830
>>> rebuild_sched_domains+0x1b/0x30
>>> cpuset_hotplug_workfn+0x2ca/0xc90
>>> ? balance_push+0x56/0xf0
>>> ? _raw_spin_unlock+0x15/0x30
>>> ? finish_task_switch+0x98/0x2f0
>>> ? __switch_to+0x291/0x410
>>> ? __schedule+0x65e/0x1310
>>> process_one_work+0x1bc/0x3d0
>>> worker_thread+0x4c/0x380
>>> ? preempt_count_add+0x92/0xa0
>>> ? rescuer_thread+0x310/0x310
>>> kthread+0xe6/0x110
>>> ? kthread_complete_and_exit+0x20/0x20
>>> ret_from_fork+0x1f/0x30
>>>
>>> The rq clock has been updated before the set_rq_offline()
>>> function runs, so we don't need to call update_rq_clock() in
>>> unthrottle_offline_cfs_rqs().
>> I don't think we do in the path from rq_attach_root (though that's easy
>> enough to fix, of course).
>>
>
> Thanks for your review.
>
> Our current fix is: after applying patch 1, we update the rq clock before
> calling set_rq_offline(), and then use rq_clock_{start,stop}_loop_update() to
> avoid updating the rq clock multiple times in unthrottle_cfs_rq() (see the
> sketch after this message).
>
> Do you have any better suggestions?
>
> Thanks,
> Hao
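
For readers following along, the pattern described above is roughly the
following (a sketch pieced together from the helper names mentioned in the
thread and the existing RQCF_ACT_SKIP machinery; the actual patches may differ
in detail):

/*
 * Sketch: the helpers set/clear RQCF_ACT_SKIP while the rq lock is held,
 * so update_rq_clock() returns early for every cfs_rq visited in the loop
 * instead of warning about a double update.
 */
static inline void rq_clock_start_loop_update(struct rq *rq)
{
	lockdep_assert_rq_held(rq);
	SCHED_WARN_ON(rq->clock_update_flags & RQCF_ACT_SKIP);
	rq->clock_update_flags |= RQCF_ACT_SKIP;
}

static inline void rq_clock_stop_loop_update(struct rq *rq)
{
	lockdep_assert_rq_held(rq);
	rq->clock_update_flags &= ~RQCF_ACT_SKIP;
}

static void __maybe_unused unthrottle_offline_cfs_rqs(struct rq *rq)
{
	struct task_group *tg;

	lockdep_assert_rq_held(rq);

	/* The clock was updated before set_rq_offline(); skip further updates. */
	rq_clock_start_loop_update(rq);

	rcu_read_lock();
	list_for_each_entry_rcu(tg, &task_groups, list) {
		struct cfs_rq *cfs_rq = tg->cfs_rq[cpu_of(rq)];

		if (!cfs_rq->runtime_enabled)
			continue;

		/* ... existing unthrottle logic ... */
		if (cfs_rq_throttled(cfs_rq))
			unthrottle_cfs_rq(cfs_rq);
	}
	rcu_read_unlock();

	rq_clock_stop_loop_update(rq);
}
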
Yeah, the obvious fixes are to either add an update_rq_clock() call in
rq_attach_root() as you suggest, or to put it in set_rq_offline() instead of
the callers.
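
To make the second option concrete, a minimal, untested sketch of what
updating the clock inside set_rq_offline() itself could look like, so that
every caller (rq_attach_root(), sched_cpu_deactivate(), ...) sees a fresh
clock before the per-class ->rq_offline() callbacks run:

void set_rq_offline(struct rq *rq)
{
	if (rq->online) {
		const struct sched_class *class;

		/*
		 * Update the clock here once, so that rq_offline_fair() ->
		 * unthrottle_cfs_rq() can rely on it regardless of which
		 * caller we came from, and the callers no longer need to
		 * update it themselves.
		 */
		update_rq_clock(rq);

		for_each_class(class) {
			if (class->rq_offline)
				class->rq_offline(rq);
		}

		cpumask_clear_cpu(rq->cpu, rq->rd->online);
		rq->online = 0;
	}
}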