Message-ID: <20250917093441.GU3419281@noisy.programming.kicks-ass.net>
Date: Wed, 17 Sep 2025 11:34:41 +0200
From: Peter Zijlstra <peterz@...radead.org>
To: John Stultz <jstultz@...gle.com>
Cc: Juri Lelli <juri.lelli@...hat.com>, LKML <linux-kernel@...r.kernel.org>,
Ingo Molnar <mingo@...hat.com>,
Vincent Guittot <vincent.guittot@...aro.org>,
Dietmar Eggemann <dietmar.eggemann@....com>,
Valentin Schneider <vschneid@...hat.com>,
Steven Rostedt <rostedt@...dmis.org>,
Ben Segall <bsegall@...gle.com>, Mel Gorman <mgorman@...e.de>,
Xuewen Yan <xuewen.yan94@...il.com>,
K Prateek Nayak <kprateek.nayak@....com>,
Suleiman Souhlal <suleiman@...gle.com>,
Qais Yousef <qyousef@...alina.io>,
Joel Fernandes <joelagnelf@...dia.com>,
kuyo chang <kuyo.chang@...iatek.com>, hupu <hupu.gm@...il.com>,
kernel-team@...roid.com
Subject: Re: [RFC][PATCH] sched/deadline: Fix dl_server getting stuck,
allowing cpu starvation

On Tue, Sep 16, 2025 at 08:29:01PM -0700, John Stultz wrote:
> The problem is that previously the tasks were spawned without much
> delay, but after your change, we see each RT task after the first
> NR_CPU take ~1 second to spawn. On systems with large CPU counts,
> taking NR_CPU*player-types seconds to start can be long enough (easily
> minutes) that we hit hung task errors.
>
> As I've been digging, part of the issue seems to be that kthreadd is
> not an RT task, so after we create NR_CPU spinner threads, the
> following threads don't actually start right away. They have to wait
> until rt-throttling or the dl_server runs, allowing kthreadd to get
> CPU time and start the thread. But then we have to wait until
> throttling stops so that the RT prio Ref thread can run to set the new
> thread as RT prio and then spawn the next, which then won't start
> until we do throttling again, etc.
>
> So it makes sense that if the dl_server is "refreshed" once a second,
> we might see this one-thread-per-second interlock happening. So this
> seems likely to just be an issue with my test.
>
> However, prior to your change, it seemed like the RT tasks would be
> throttled, kthreadd would run and the thread would start, then with
> no SCHED_NORMAL tasks to run, the dl_server would stop and we'd run
> the Ref, which would immediately enqueue kthreadd, which started the
> dl_server, and since the dl_server hadn't done much, I'm guessing it
> still had some runtime/bandwidth and would run kthreadd and then stop
> right after.
>
> So now my guess (and my apologies, as I really am not familiar enough
> with the DL details) is that since we're not stopping the dl_server,
> it seems like the dl_server runtime gets used up each cycle, even
> though it didn't do much boosting. So we have to wait the full cycle
> for a refresh.

Yes. This makes sense.

The old code would disable the dl_server when the number of fair tasks
drops to 0, so even though we had that yield in __pick_task_dl(), we'd
never hit it.
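
For reference, the yield path in question looks roughly like the below
(paraphrased from memory, so take the names with a grain of salt):

	/* __pick_task_dl(), dl_server case -- sketch, not the exact tree */
	p = dl_se->server_pick_task(dl_se);
	if (!p) {
		/*
		 * Nothing to serve; give back the remaining runtime for
		 * this period instead of spinning on an empty queue.
		 */
		dl_se->dl_yielded = 1;
		update_curr_dl_se(rq, dl_se, 0);
		return NULL;
	}
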
So the moment another fair task shows up (0->1) we re-enqueue the
dl_server (using update_dl_entity() / CBS wakeup rules) and continue
consuming bandwidth.
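
That wakeup rule, in isolation, is roughly the below (stand-alone toy
with made-up numbers; the real thing is update_dl_entity() /
dl_entity_overflow() in deadline.c and uses scaled fixed-point math):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

struct cbs {
	uint64_t dl_runtime, dl_period;	/* reservation: runtime every period (ns) */
	uint64_t runtime, deadline;	/* remaining budget, absolute deadline (ns) */
};

/*
 * (Re)activation at time 'now': keep the leftover (runtime, deadline)
 * pair only if it doesn't exceed the reserved bandwidth, otherwise
 * start a fresh period with a full budget.
 */
static void cbs_wakeup(struct cbs *s, uint64_t now)
{
	bool overflow = s->deadline <= now ||
		s->runtime * s->dl_period > (s->deadline - now) * s->dl_runtime;

	if (overflow) {
		s->deadline = now + s->dl_period;
		s->runtime  = s->dl_runtime;
	}
}

int main(void)
{
	/* 50ms every 1s -- what I believe the fair server defaults to */
	struct cbs s = { 50000000, 1000000000, 50000000, 0 };

	cbs_wakeup(&s, 100000000);	/* fair task shows up at t=100ms */
	printf("deadline=%llums runtime=%llums\n",
	       (unsigned long long)(s.deadline / 1000000),
	       (unsigned long long)(s.runtime / 1000000));
	return 0;
}
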
However, since we're now not stopping the thing, we hit that yield,
getting this pretty terrible behaviour where we will only run fair tasks
until there are none and then yield our entire period, forcing another
task to wait until the next cycle.
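
Concretely, assuming the default 50ms/1s fair-server reservation, the
bad case goes something like:

  t = 0        replenish: runtime = 50ms, next replenish at t = 1s
  t = 0+       server runs kthreadd, the pending thread gets created
  t = 0++      no fair tasks left -> __pick_task_dl() yields the ~50ms left
  0++ .. 1s    the RT spinners own the CPUs, the next kthreadd wakeup waits
  t = 1s       replenish again, one more thread gets created, and so on

which lines up with the one task per second spawn rate you're seeing.
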
Let me go have a play, surely we can do better.
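
FWIW, the spawn pattern described above boils down to something like the
sketch below -- a guess at the shape of the test, not the actual code;
kthread_run() is where kthreadd gets involved:

#include <linux/cpumask.h>
#include <linux/err.h>
#include <linux/kthread.h>
#include <linux/module.h>
#include <linux/sched.h>

static int spinner_fn(void *data)
{
	while (!kthread_should_stop())
		cpu_relax();			/* hog the CPU at RT prio */
	return 0;
}

static int __init repro_init(void)
{
	struct task_struct *p;
	int i;

	/* one FIFO spinner per CPU; after this kthreadd (SCHED_NORMAL) starves */
	for (i = 0; i < num_online_cpus(); i++) {
		p = kthread_run(spinner_fn, NULL, "spinner/%d", i);
		if (!IS_ERR(p))
			sched_set_fifo(p);
	}

	/*
	 * Every further kthread_run() now waits for kthreadd to get CPU time,
	 * which only happens when RT throttling or the fair dl_server lets a
	 * SCHED_NORMAL task run -- i.e. at most once per server period.
	 */
	p = kthread_run(spinner_fn, NULL, "late-spinner");
	if (!IS_ERR(p))
		sched_set_fifo(p);

	return 0;
}
module_init(repro_init);
MODULE_LICENSE("GPL");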