Message-ID: <04ffa7ce-85ba-431b-91ab-f725f31b03ed@amd.com>
Date: Tue, 10 Feb 2026 23:39:14 +0530
From: K Prateek Nayak <kprateek.nayak@....com>
To: Doug Smythies <dsmythies@...us.net>, 'Peter Zijlstra'
<peterz@...radead.org>
CC: <mingo@...nel.org>, <juri.lelli@...hat.com>, <vincent.guittot@...aro.org>,
<dietmar.eggemann@....com>, <rostedt@...dmis.org>, <bsegall@...gle.com>,
<mgorman@...e.de>, <vschneid@...hat.com>, <linux-kernel@...r.kernel.org>,
<wangtao554@...wei.com>, <quzicheng@...wei.com>, <wuyun.abel@...edance.com>
Subject: Re: [PATCH 0/4] sched: Various reweight_entity() fixes
Hello Doug,
On 2/10/2026 9:11 PM, Doug Smythies wrote:
> My test computer also hung under the heavy load test,
> albeit at a higher load than before.
> There was no log information that I could find after the reboot.
Could you run the same scenario with PARANOID_AVG enabled:

  echo PARANOID_AVG > /sys/kernel/debug/sched/features

and, once you are past the point where the system would usually have
hung, check whether the "sum_shifts" reported for the cfs_rqs in
debugfs has changed to a non-zero value:

  grep "shift.*: [^0]$" /sys/kernel/debug/sched/debug
I'm assuming this is the same "yes" x 12500 copies bomb that failed.
Let me see if I can reproduce this on my setup by leaving it going
overnight on a limited cpuset.
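One way to confine such a reproduction run (a sketch; `taskset` from util-linux is used here as a simple stand-in for a full cpuset cgroup, and the copy count of 12500 comes from the message above):

```shell
# spawn_load N CPULIST: start N background "yes" loops pinned to the
# CPUs in CPULIST. Nothing runs until the function is called.
spawn_load() {
    n=$1
    cpus=$2
    i=0
    while [ "$i" -lt "$n" ]; do
        taskset -c "$cpus" yes > /dev/null &
        i=$((i + 1))
    done
}

# Example (not run here): 12500 copies confined to CPUs 0-3.
# spawn_load 12500 0-3
```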
Since you mentioned the hang appears only beyond some bound on the
number of copies, could you please share your system details and the
number of CPUs it has?
--
Thanks and Regards,
Prateek