Message-ID: <4902f7d4-c6ee-bc29-dd7f-282d19d0b3b2@gmail.com>
Date: Wed, 15 Oct 2025 14:31:27 +0800
From: Hao Jia <jiahao.kernel@...il.com>
To: Aaron Lu <ziqianlu@...edance.com>
Cc: Valentin Schneider <vschneid@...hat.com>, Ben Segall
<bsegall@...gle.com>, K Prateek Nayak <kprateek.nayak@....com>,
Peter Zijlstra <peterz@...radead.org>,
Chengming Zhou <chengming.zhou@...ux.dev>, Josh Don <joshdon@...gle.com>,
Ingo Molnar <mingo@...hat.com>, Vincent Guittot
<vincent.guittot@...aro.org>, Xi Wang <xii@...gle.com>,
linux-kernel@...r.kernel.org, Juri Lelli <juri.lelli@...hat.com>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Steven Rostedt <rostedt@...dmis.org>, Mel Gorman <mgorman@...e.de>,
Chuyi Zhou <zhouchuyi@...edance.com>, Jan Kiszka <jan.kiszka@...mens.com>,
Florian Bezdeka <florian.bezdeka@...mens.com>,
Songtang Liu <liusongtang@...edance.com>, Chen Yu <yu.c.chen@...el.com>,
Matteo Martelli <matteo.martelli@...ethink.co.uk>,
Michal Koutný <mkoutny@...e.com>,
Sebastian Andrzej Siewior <bigeasy@...utronix.de>
Subject: Re: [PATCH] sched/fair: Prevent cfs_rq from being unthrottled with
zero runtime_remaining
On 2025/10/15 10:51, Aaron Lu wrote:
> On Wed, Oct 15, 2025 at 09:43:20AM +0800, Hao Jia wrote:
> ... ...
>> Yes, I've already hit the cfs_rq->runtime_remaining < 0 condition in
>> tg_unthrottle_up().
>>
>> This morning, after applying your patch, I still hit the same issue.
>> However, as before, because cfs_rq->curr isn't NULL,
>> check_enqueue_throttle() returns early, preventing throttle_cfs_rq()
>> from being triggered.
>>
>>
>> Some information to share with you.
>
> Can you also share your cgroup setup and related quota setting etc. and
> how to trigger it? Thanks.
I ran some internal workloads on my test machine with different quota
settings, and added 10 sched messaging benchmark cgroups with their
cpu.max set to "1000 100000". The benchmark was started with:

perf bench sched messaging -g 10 -t -l 50000 &
I'm not sure if the issue can be reproduced without these internal
workloads.
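
For reference, the cgroup side can be recreated with something like the
sketch below (assumptions: bash, cgroup v2 mounted at /sys/fs/cgroup, run
as root; the "sched-msg-$i" names are only placeholders). A cpu.max of
"1000 100000" means a 1000us quota per 100000us period, i.e. roughly 1%
of one CPU:

  # enable the cpu controller for child cgroups, if not already enabled
  echo "+cpu" > /sys/fs/cgroup/cgroup.subtree_control

  for i in $(seq 1 10); do
      # create the cgroup and apply the tight quota: 1000us per 100000us period
      mkdir -p /sys/fs/cgroup/sched-msg-$i
      echo "1000 100000" > /sys/fs/cgroup/sched-msg-$i/cpu.max

      # move a subshell into the cgroup, then exec the benchmark inside it
      (
          echo $BASHPID > /sys/fs/cgroup/sched-msg-$i/cgroup.procs
          exec perf bench sched messaging -g 10 -t -l 50000
      ) &
  done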
Thanks,
Hao