Message-ID: <175301303716.263023.8607719725649129120.tglx@xen13>
Date: Sun, 20 Jul 2025 14:04:59 +0200 (CEST)
From: Thomas Gleixner <tglx@...utronix.de>
To: Linus Torvalds <torvalds@...ux-foundation.org>
Cc: linux-kernel@...r.kernel.org, x86@...nel.org
Subject: [GIT pull] sched/urgent for v6.16-rc7
Linus,
please pull the latest sched/urgent branch from:
git://git.kernel.org/pub/scm/linux/kernel/git/tip/tip.git sched-urgent-2025-07-20
up to: 36569780b0d6: sched: Change nr_uninterruptible type to unsigned long
A single fix for the scheduler. A recent commit changed the runqueue
counter nr_uninterruptible to an unsigned int. Because these per-CPU
counters are not updated when an uninterruptible task migrates to a
different CPU, an individual counter can exceed INT_MAX. The counter is
cast to int in the load average calculation, so a value above INT_MAX
wraps into negative space, resulting in bogus load average values.
Convert the counter back to unsigned long to fix this.
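
For illustration, a minimal standalone sketch (plain C in userspace, not
the kernel code; the variable names are made up) of why the (int) cast
corrupts the sum once the counter exceeds INT_MAX, while widening through
unsigned long does not:

  #include <limits.h>
  #include <stdio.h>

  int main(void)
  {
          /* Simulate a per-CPU counter that was mostly decremented on
           * this CPU and therefore wrapped past INT_MAX as unsigned. */
          unsigned int nr_uninterruptible = (unsigned int)INT_MAX + 42u;

          /* Old behaviour: the cast to int wraps into negative space
           * (implementation-defined, but negative on the usual ABIs). */
          long broken = (int)nr_uninterruptible;

          /* Fixed behaviour: with the field widened to unsigned long,
           * the cast to long preserves the value on 64-bit. */
          long fixed = (long)(unsigned long)nr_uninterruptible;

          printf("broken: %ld\n", broken);  /* large negative number */
          printf("fixed:  %ld\n", fixed);   /* 2147483689 */
          return 0;
  }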
Thanks,
tglx
------------------>
Aruna Ramakrishna (1):
      sched: Change nr_uninterruptible type to unsigned long

 kernel/sched/loadavg.c | 2 +-
 kernel/sched/sched.h   | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/loadavg.c b/kernel/sched/loadavg.c
index c48900b856a2..52ca8e268cfc 100644
--- a/kernel/sched/loadavg.c
+++ b/kernel/sched/loadavg.c
@@ -80,7 +80,7 @@ long calc_load_fold_active(struct rq *this_rq, long adjust)
 	long nr_active, delta = 0;
 
 	nr_active = this_rq->nr_running - adjust;
-	nr_active += (int)this_rq->nr_uninterruptible;
+	nr_active += (long)this_rq->nr_uninterruptible;
 
 	if (nr_active != this_rq->calc_load_active) {
 		delta = nr_active - this_rq->calc_load_active;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 475bb5998295..83e3aa917142 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1149,7 +1149,7 @@ struct rq {
 	 * one CPU and if it got migrated afterwards it may decrease
 	 * it on another CPU. Always updated under the runqueue lock:
 	 */
-	unsigned int		nr_uninterruptible;
+	unsigned long		nr_uninterruptible;
 
 	union {
 		struct task_struct __rcu *donor;	/* Scheduler context */