Message-ID: <5607f1e2-b235-4eda-a9d9-2e9519db3f74@huawei.com>
Date: Mon, 22 Jul 2024 16:16:12 +0800
From: Zhang Qiao <zhangqiao22@...wei.com>
To: Chuyi Zhou <zhouchuyi@...edance.com>, <mingo@...hat.com>,
<peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>
CC: <chengming.zhou@...ux.dev>, <linux-kernel@...r.kernel.org>,
<joshdon@...gle.com>
Subject: Re: [PATCH 1/2] sched/fair: Decrease cfs bandwidth usage in
task_group destruction

On 2024/7/22 15:46, Chuyi Zhou wrote:
>>>
>>> Thanks for your information.
>>>
>>> I think cfs_bandwidth_usage_dec() should be moved to a more suitable place where the
>>> hotplug lock can be held (e.g. cpu_cgroup_css_released()). I will run some tests to verify this.
>>>
>>
>> cpu_cgroup_css_released() also doesn't appear to run in a context that holds the cpu hotplug lock.
>>
>
> IIUC, cpus_read_lock()/cpus_read_unlock() can be called in cpu_cgroup_css_released(), right? But the
> cfs bandwidth teardown may run in an RCU callback, since the task group list is protected by RCU,
> so we cannot take the lock there. Did I miss something important?
Okay, you're right. I overlooked the fact that we can't take the hotplug lock in an RCU callback.
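
For reference, a minimal sketch of the direction discussed above: drop the
bandwidth-used static key from cpu_cgroup_css_released(), which runs in
process context where cpus_read_lock() may be taken, rather than from the
RCU callback that frees the task_group. This is only an illustration, not
the posted patch; the quota check and its placement here are assumptions
(CONFIG_CFS_BANDWIDTH assumed throughout):

	static void cpu_cgroup_css_released(struct cgroup_subsys_state *css)
	{
		struct task_group *tg = css_tg(css);

		/*
		 * Sketch: if this group had a quota configured, decrement the
		 * static key here, where taking the hotplug lock is allowed.
		 * cfs_bandwidth_usage_dec() expects cpus_read_lock() to be
		 * held. The RCU callback that later frees the task_group must
		 * then no longer touch the static key.
		 */
		if (tg->cfs_bandwidth.quota != RUNTIME_INF) {
			cpus_read_lock();
			cfs_bandwidth_usage_dec();
			cpus_read_unlock();
		}

		sched_release_group(tg);	/* existing teardown */
	}

The point is only the context split: the static key operation needs
cpus_read_lock(), which is fine in css_released() but not in the RCU
callback path.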