Message-ID: <91e88019-52f7-4fa6-a14b-ca5ecb8e63cf@huawei.com>
Date: Mon, 22 Jul 2024 11:47:01 +0800
From: Zhang Qiao <zhangqiao22@...wei.com>
To: Chuyi Zhou <zhouchuyi@...edance.com>, <mingo@...hat.com>,
<peterz@...radead.org>, <juri.lelli@...hat.com>,
<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
<vschneid@...hat.com>
CC: <chengming.zhou@...ux.dev>, <linux-kernel@...r.kernel.org>,
<joshdon@...gle.com>
Subject: Re: [PATCH 1/2] sched/fair: Decrease cfs bandwidth usage in
task_group destruction
Hi, Chuyi
On 2024/7/21 20:52, Chuyi Zhou wrote:
> The static key __cfs_bandwidth_used is used to indicate whether bandwidth
> control is enabled in the system. Currently, it is only decreased when a
> task group disables bandwidth control. This is incorrect: if a task group
> that enabled bandwidth control is destroyed without first disabling it,
> __cfs_bandwidth_used will never go back to zero, even when no task_group
> is using bandwidth control any more.
>
> This patch tries to fix this issue by decreasing bandwidth usage in
> destroy_cfs_bandwidth().
>
> Signed-off-by: Chuyi Zhou <zhouchuyi@...edance.com>
> ---
> kernel/sched/fair.c | 3 +++
> 1 file changed, 3 insertions(+)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index b1e07ce90284..7ad50dc31a93 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6447,6 +6447,9 @@ static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
> hrtimer_cancel(&cfs_b->period_timer);
> hrtimer_cancel(&cfs_b->slack_timer);
>
> + if (cfs_b->quota != RUNTIME_INF)
> + cfs_bandwidth_usage_dec();
This calls static_key_slow_dec_cpuslocked(), but destroy_cfs_bandwidth()
isn't holding the hotplug lock [1].
I also sent a patch to fix this issue, but it was not merged into mainline [2].
[1]: https://lore.kernel.org/all/20210712162655.w3j6uczwbfkzazvt@oracle.com/
[2]: https://lore.kernel.org/all/20210910094139.184582-1-zhangqiao22@huawei.com/
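As a minimal sketch of one possible direction (not a tested patch): the
decrement could be made safe in this path by taking the hotplug read lock
around it, since static_key_slow_dec_cpuslocked() expects the caller to
hold it, as tg_set_cfs_bandwidth() does. Whether cpus_read_lock() can be
taken here without a lock ordering problem would still need to be checked:

```c
/* Sketch only, elided context marked with "..." -- not a compilable unit. */
static void destroy_cfs_bandwidth(struct cfs_bandwidth *cfs_b)
{
	...
	hrtimer_cancel(&cfs_b->period_timer);
	hrtimer_cancel(&cfs_b->slack_timer);

	if (cfs_b->quota != RUNTIME_INF) {
		/*
		 * cfs_bandwidth_usage_dec() ends up in
		 * static_key_slow_dec_cpuslocked(), which requires the
		 * hotplug lock to be held by the caller.
		 */
		cpus_read_lock();
		cfs_bandwidth_usage_dec();
		cpus_read_unlock();
	}
	...
}
```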
Thanks,
--
Qiao Zhang.
> +
> /*
> * It is possible that we still have some cfs_rq's pending on a CSD
> * list, though this race is very rare. In order for this to occur, we