[<prev] [next>] [<thread-prev] [thread-next>] [day] [month] [year] [list]
Message-ID: <f14f2f88-2b0d-73ef-13b1-c768377b86fd@redhat.com>
Date: Sun, 29 Jan 2023 21:58:29 -0500
From: Waiman Long <longman@...hat.com>
To: Qais Yousef <qyousef@...alina.io>
Cc: Peter Zijlstra <peterz@...radead.org>,
Ingo Molnar <mingo@...nel.org>,
Juri Lelli <juri.lelli@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>, tj@...nel.org,
linux-kernel@...r.kernel.org, luca.abeni@...tannapisa.it,
claudio@...dence.eu.com, tommaso.cucinotta@...tannapisa.it,
bristot@...hat.com, mathieu.poirier@...aro.org,
Dietmar Eggemann <dietmar.eggemann@....com>,
cgroups@...r.kernel.org,
Vincent Guittot <vincent.guittot@...aro.org>,
Wei Wang <wvw@...gle.com>, Rick Yiu <rickyiu@...gle.com>,
Quentin Perret <qperret@...gle.com>
Subject: Re: [PATCH v2] sched: cpuset: Don't rebuild sched domains on
suspend-resume
On 1/29/23 21:49, Waiman Long wrote:
> On 1/25/23 11:35, Qais Yousef wrote:
>> On 01/20/23 17:16, Waiman Long wrote:
>>> On 1/20/23 14:48, Qais Yousef wrote:
>>>> Commit f9a25f776d78 ("cpusets: Rebuild root domain deadline
>>>> accounting information")
>>>> enabled rebuilding sched domain on cpuset and hotplug operations to
>>>> correct deadline accounting.
>>>>
>>>> Rebuilding sched domain is a slow operation and we see 10+ ms delay on
>>>> suspend-resume because of that.
>>>>
>>>> Since nothing is expected to change on suspend-resume operation; skip
>>>> rebuilding the sched domains to regain the time lost.
>>>>
>>>> Debugged-by: Rick Yiu <rickyiu@...gle.com>
>>>> Signed-off-by: Qais Yousef (Google) <qyousef@...alina.io>
>>>> ---
>>>>
>>>> Changes in v2:
>>>> * Remove redundant check in update_tasks_root_domain()
>>>> (Thanks Waiman)
>>>> v1 link:
>>>> https://lore.kernel.org/lkml/20221216233501.gh6m75e7s66dmjgo@airbuntu/
>>>>
>>>> kernel/cgroup/cpuset.c | 3 +++
>>>> kernel/sched/deadline.c | 3 +++
>>>> 2 files changed, 6 insertions(+)
>>>>
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index a29c0b13706b..9a45f083459c 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -1088,6 +1088,9 @@ static void rebuild_root_domains(void)
>>>> lockdep_assert_cpus_held();
>>>> lockdep_assert_held(&sched_domains_mutex);
>>>> + if (cpuhp_tasks_frozen)
>>>> + return;
>>>> +
>>>> rcu_read_lock();
>>>> /*
>>>> diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
>>>> index 0d97d54276cc..42c1143a3956 100644
>>>> --- a/kernel/sched/deadline.c
>>>> +++ b/kernel/sched/deadline.c
>>>> @@ -2575,6 +2575,9 @@ void dl_clear_root_domain(struct root_domain
>>>> *rd)
>>>> {
>>>> unsigned long flags;
>>>> + if (cpuhp_tasks_frozen)
>>>> + return;
>>>> +
>>>> raw_spin_lock_irqsave(&rd->dl_bw.lock, flags);
>>>> rd->dl_bw.total_bw = 0;
>>>> raw_spin_unlock_irqrestore(&rd->dl_bw.lock, flags);
>>> cpuhp_tasks_frozen is set when thaw_secondary_cpus() or
>>> freeze_secondary_cpus() is called. I don't know the exact
>>> suspend/resume
>>> calling sequences, will cpuhp_tasks_frozen be cleared at the end of
>>> resume
>>> sequence? Maybe we should make sure that rebuild_root_domain() is
>>> called at
>>> least once at the end of resume operation.
>> Very good questions. It made me look at the logic again and I realize
>> now that
>> the way force_build behaves is causing this issue.
>>
>> I *think* we should just make the call rebuild_root_domains() only if
>> cpus_updated in cpuset_hotplug_workfn().
>>
>> cpuset_cpu_active() seems to be the source of force_rebuild in my
>> case; which
>> seems to be called only after the last cpu is back online (what you
>> suggest).
>> In this case we can end up with cpus_updated = false, but
>> force_rebuild = true.
>>
>> Now you added a couple of new users to force_rebuild in
>> 4b842da276a8a; I'm
>> trying to figure out what the conditions would be there. It seems we
>> can have
>> corner cases for cpus_update might not trigger correctly?
>>
>> Could the below be a good cure?
>>
>> AFAICT we must rebuild the root domains if something has changed in
>> cpuset.
>> Which should be captured by either having:
>>
>> * cpus_updated = true
>> * force_rebuild && !cpuhp_tasks_frozen
>>
>> /me goes to test the patch
>>
>> --->8---
>>
>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>> index a29c0b13706b..363e4459559f 100644
>> --- a/kernel/cgroup/cpuset.c
>> +++ b/kernel/cgroup/cpuset.c
>> @@ -1079,6 +1079,8 @@ static void update_tasks_root_domain(struct
>> cpuset *cs)
>> css_task_iter_end(&it);
>> }
>>
>> +static bool need_rebuild_rd = true;
>> +
>> static void rebuild_root_domains(void)
>> {
>> struct cpuset *cs = NULL;
>> @@ -1088,6 +1090,9 @@ static void rebuild_root_domains(void)
>> lockdep_assert_cpus_held();
>> lockdep_assert_held(&sched_domains_mutex);
>>
>> + if (!need_rebuild_rd)
>> + return;
>> +
>> rcu_read_lock();
>>
>> /*
>> @@ -3627,7 +3632,9 @@ static void cpuset_hotplug_workfn(struct
>> work_struct *work)
>> /* rebuild sched domains if cpus_allowed has changed */
>> if (cpus_updated || force_rebuild) {
>> force_rebuild = false;
>> + need_rebuild_rd = cpus_updated || (force_rebuild
>> && !cpuhp_tasks_frozen);
>> rebuild_sched_domains();
>> + need_rebuild_rd = true;
>
> You do the force_check check after it is set to false in the previous
> statement which is definitely not correct. So it will be false
> whenever cpus_updated is false.
>
> If you just want to skip rebuild_sched_domains() call for hotplug, why
> don't just skip the call here if the condition is right? Like
>
> /* rebuild sched domains if cpus_allowed has changed */
> if (cpus_updated || (force_rebuild && !cpuhp_tasks_frozen)) {
> force_rebuild = false;
> rebuild_sched_domains();
> }
>
> Still, we will need to confirm that cpuhp_tasks_frozen will be cleared
> outside of the suspend/resume cycle.
BTW, you also need to expand the comment to explain why we need to check
for cpuhp_tasks_frozen.
Cheers,
Longman
Powered by blists - more mailing lists