Message-ID: <xm26ttghysq0.fsf@google.com>
Date: Mon, 22 Jul 2024 15:20:07 -0700
From: Benjamin Segall <bsegall@...gle.com>
To: Zhang Qiao <zhangqiao22@...wei.com>
Cc: Chuyi Zhou <zhouchuyi@...edance.com>, <mingo@...hat.com>,
<peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <mgorman@...e.de>, <vschneid@...hat.com>,
<chengming.zhou@...ux.dev>, <linux-kernel@...r.kernel.org>,
<joshdon@...gle.com>
Subject: Re: [PATCH 1/2] sched/fair: Decrease cfs bandwidth usage in
task_group destruction

Zhang Qiao <zhangqiao22@...wei.com> writes:
> On 2024/7/22 15:46, Chuyi Zhou wrote:
>>>>
>>>> Thanks for your information.
>>>>
>>>> I think maybe cfs_bandwidth_usage_dec() should be moved to another, more suitable place where we could
>>>> hold the hotplug lock (e.g. cpu_cgroup_css_released()). I will do some tests to verify it.
>>>>
>>>
>>> cpu_cgroup_css_released() also doesn't seem to be called in a cpu hotplug lock-holding context.
>>>
>>
>> IIUC, cpus_read_lock()/cpus_read_unlock() can be called in cpu_cgroup_css_released(), right? But cfs
>> bandwidth destruction may run in an RCU callback, since the task group list is protected by RCU, so we
>> could not take the lock. Did I miss something important?
>
>
> Okay, you're right. I overlooked that we can't take the hotplug lock in an RCU callback.

Yeah, cpu_cgroup_css_released/cpu_cgroup_css_free are fine I think, and
it should be correct to move the call to destroy_cfs_bandwidth() to
cpu_cgroup_css_free (it's unfortunate in terms of code organization, but
as far as correctness goes it should be fine).
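
Untested sketch of what I mean, just to illustrate where the call would
land. destroy_cfs_bandwidth() is currently static in fair.c, so it would
need to be exposed or given a small wrapper there (that's the
code-organization wart); the names and details below are my assumptions,
not an actual diff:

static void cpu_cgroup_css_free(struct cgroup_subsys_state *css)
{
	struct task_group *tg = css_tg(css);

	/*
	 * css_free runs in process context, after the RCU grace period
	 * following css_released, so unlike the tg RCU-callback path it
	 * is safe to take the hotplug lock here for the static key
	 * decrement done in the bandwidth teardown.
	 */
	cpus_read_lock();
	destroy_cfs_bandwidth(tg_cfs_bandwidth(tg));
	cpus_read_unlock();

	sched_unregister_group(tg);
}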

As far as the diff goes, the _dec should go after the
__cfsb_csd_unthrottle loop.
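
Roughly like this (again untested; the quota check around the _dec is my
guess at what the patch already does, and the caller would need to hold
cpus_read_lock() across the _dec):

static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
{
	int i;

	/* init_cfs_bandwidth() was never called for this group */
	if (!cfs_b->throttled_cfs_rq.next)
		return;

	hrtimer_cancel(&cfs_b->period_timer);
	hrtimer_cancel(&cfs_b->slack_timer);

	/*
	 * Flush any cfs_rq still pending on a remote CSD list before
	 * touching the static key; the unthrottle path still checks
	 * cfs_bandwidth_used().
	 */
	for_each_possible_cpu(i) {
		struct rq *rq = cpu_rq(i);
		unsigned long flags;

		if (list_empty(&rq->cfsb_csd_list))
			continue;

		local_irq_save(flags);
		__cfsb_csd_unthrottle(rq);
		local_irq_restore(flags);
	}

	/* Only now drop the usage count (needs cpus_read_lock held). */
	if (cfs_b->quota != RUNTIME_INF)
		cfs_bandwidth_usage_dec();
}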