Message-ID: <f361f595-f703-358b-4785-4b81df1d3269@huawei.com>
Date: Fri, 9 Aug 2024 17:06:40 +0800
From: "zhaowenhui (A)" <zhaowenhui8@...wei.com>
To: <mingo@...hat.com>, <peterz@...radead.org>, <juri.lelli@...hat.com>,
	<vincent.guittot@...aro.org>, <dietmar.eggemann@....com>,
	<rostedt@...dmis.org>, <bsegall@...gle.com>, <mgorman@...e.de>,
	<bristot@...hat.com>, <vschneid@...hat.com>, <linux-kernel@...r.kernel.org>
CC: <tanghui20@...wei.com>
Subject: Re: [PATCH] sched/rt: Fix rt_runtime leaks with cpu hotplug and
 RT_RUNTIME_SHARE



On 2024/6/3 17:00, zhaowenhui (A) wrote:
> Friendly Ping.
> 
> Regards,
> Zhao Wenhui
> 
> On 2024/5/24 11:42, Zhao Wenhui wrote:
>> When using cgroup rt_bandwidth with RT_RUNTIME_SHARE, if a cpu
>> hotplug operation and a cpu.rt_runtime_us change run concurrently,
>> the warning in __disable_runtime() may trigger:
>> [  991.697692] WARNING: CPU: 0 PID: 49573 at kernel/sched/rt.c:802
>> rq_offline_rt+0x24d/0x260
>> [  991.697795] CPU: 0 PID: 49573 Comm: kworker/1:0 Kdump: loaded Not
>> tainted 6.9.0-rc1+ #4
>> [  991.697800] Workqueue: events cpuset_hotplug_workfn
>> [  991.697803] RIP: 0010:rq_offline_rt+0x24d/0x260
>> [  991.697825] Call Trace:
>> [  991.697827]  <TASK>
>> [  991.697858]  set_rq_offline.part.125+0x2d/0x70
>> [  991.697864]  rq_attach_root+0xda/0x110
>> [  991.697867]  cpu_attach_domain+0x433/0x860
>> [  991.697880]  partition_sched_domains_locked+0x2a8/0x3a0
>> [  991.697885]  rebuild_sched_domains_locked+0x608/0x800
>> [  991.697895]  rebuild_sched_domains+0x1b/0x30
>> [  991.697897]  cpuset_hotplug_workfn+0x4b6/0x1160
>> [  991.697909]  process_scheduled_works+0xad/0x430
>> [  991.697917]  worker_thread+0x105/0x270
>> [  991.697922]  kthread+0xde/0x110
>> [  991.697928]  ret_from_fork+0x2d/0x50
>> [  991.697935]  ret_from_fork_asm+0x11/0x20
>> [  991.697940]  </TASK>
>> [  991.697941] ---[ end trace 0000000000000000 ]---
>>
>> That's how it happens:
>> CPU0                                   CPU1
>> -----                                  -----
>>
>> set_rq_offline(rq)
>>      __disable_runtime(rq) (1)
>>                                        tg_set_rt_bandwidth (2)
>>                                        do_balance_runtime  (3)
>> set_rq_online(rq)
>>      __enable_runtime(rq)  (4)
>>
>> In step (1) rt_rq->rt_runtime is set to RUNTIME_INF, and this rt_rq's
>> runtime is not supposed to change until its rq goes online again.
>> However, in step (2) tg_set_rt_bandwidth() can set rt_rq->rt_runtime to
>> rt_bandwidth.rt_runtime. Then, in step (3) the rt_rq's runtime is no
>> longer RUNTIME_INF, so other runqueues can borrow rt_runtime from it.
>> Finally, in step (4) the rq goes online, so its rt_rq's runtime is set
>> to rt_bandwidth.rt_runtime again, and the total rt_runtime in the
>> domain has thereby been increased. After these steps, when a cpu goes
>> offline, rebuilding the sched_domains takes every rq offline, and the
>> last rq will find that rt_runtime has increased but has nowhere to be
>> returned.
>>
>> To fix this, add a state RUNTIME_DISABLED, which means the runtime is
>> disabled and must not be used. When a rq goes offline, set its rt_rq's
>> rt_runtime to RUNTIME_DISABLED, and when the rq goes online, reset it.
>> In tg_set_rt_bandwidth() and do_balance_runtime(), never change a
>> disabled rt_runtime.
>>
>> Fixes: 7def2be1dc67 ("sched: fix hotplug cpus on ia64")
>> Closes: https://lore.kernel.org/all/47b4a790-9a27-2fc5-f2aa-f9981c6da015@huawei.com/
>> Co-developed-by: Hui Tang <tanghui20@...wei.com>
>> Signed-off-by: Hui Tang <tanghui20@...wei.com>
>> Signed-off-by: Zhao Wenhui <zhaowenhui8@...wei.com>
>> ---
>>   kernel/sched/rt.c    | 15 +++++++++------
>>   kernel/sched/sched.h |  5 +++++
>>   2 files changed, 14 insertions(+), 6 deletions(-)
>>
>> diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
>> index aa4c1c874fa4..44b8cc5a2f5f 100644
>> --- a/kernel/sched/rt.c
>> +++ b/kernel/sched/rt.c
>> @@ -704,7 +704,8 @@ static void do_balance_runtime(struct rt_rq *rt_rq)
>>            * or __disable_runtime() below sets a specific rq to inf to
>>            * indicate its been disabled and disallow stealing.
>>            */
>> -        if (iter->rt_runtime == RUNTIME_INF)
>> +        if (iter->rt_runtime == RUNTIME_INF ||
>> +                iter->rt_runtime == RUNTIME_DISABLED)
>>               goto next;
>>           /*
>> @@ -775,7 +776,9 @@ static void __disable_runtime(struct rq *rq)
>>               /*
>>                * Can't reclaim from ourselves or disabled runqueues.
>>                */
>> -            if (iter == rt_rq || iter->rt_runtime == RUNTIME_INF)
>> +            if (iter == rt_rq ||
>> +                    iter->rt_runtime == RUNTIME_INF ||
>> +                    iter->rt_runtime == RUNTIME_DISABLED)
>>                   continue;
>>               raw_spin_lock(&iter->rt_runtime_lock);
>> @@ -801,10 +804,9 @@ static void __disable_runtime(struct rq *rq)
>>           WARN_ON_ONCE(want);
>>   balanced:
>>           /*
>> -         * Disable all the borrow logic by pretending we have inf
>> -         * runtime - in which case borrowing doesn't make sense.
>> +         * Disable all the borrow logic by marking runtime disabled.
>>            */
>> -        rt_rq->rt_runtime = RUNTIME_INF;
>> +        rt_rq->rt_runtime = RUNTIME_DISABLED;
>>           rt_rq->rt_throttled = 0;
>>           raw_spin_unlock(&rt_rq->rt_runtime_lock);
>>           raw_spin_unlock(&rt_b->rt_runtime_lock);
>> @@ -2827,7 +2829,8 @@ static int tg_set_rt_bandwidth(struct task_group *tg,
>>           struct rt_rq *rt_rq = tg->rt_rq[i];
>>           raw_spin_lock(&rt_rq->rt_runtime_lock);
>> -        rt_rq->rt_runtime = rt_runtime;
>> +        if (rt_rq->rt_runtime != RUNTIME_DISABLED)
>> +            rt_rq->rt_runtime = rt_runtime;
>>           raw_spin_unlock(&rt_rq->rt_runtime_lock);
>>       }
>>       raw_spin_unlock_irq(&tg->rt_bandwidth.rt_runtime_lock);
>> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
>> index a831af102070..c2ad9102b8fa 100644
>> --- a/kernel/sched/sched.h
>> +++ b/kernel/sched/sched.h
>> @@ -183,6 +183,11 @@ extern struct list_head asym_cap_list;
>>    */
>>   #define RUNTIME_INF        ((u64)~0ULL)
>> +/*
>> + * Single value that denotes runtime is disabled, and it should not be used.
>> + */
>> +#define RUNTIME_DISABLED    (-2ULL)
>> +
>>   static inline int idle_policy(int policy)
>>   {
>>       return policy == SCHED_IDLE;
> 

Ping.

Regards,
Zhao Wenhui
