Message-ID: <20250323140358.61c1ad10@batman.local.home>
Date: Sun, 23 Mar 2025 14:03:58 -0400
From: Steven Rostedt <rostedt@...dmis.org>
To: ying chen <yc1082463@...il.com>
Cc: "mingo@...hat.com" <mingo@...hat.com>, peterz@...radead.org,
juri.lelli@...hat.com, vincent.guittot@...aro.org,
linux-kernel@...r.kernel.org, dietmar.eggemann@....com, bsegall@...gle.com,
mgorman@...e.de, bristot@...hat.com, vschneid@...hat.com
Subject: Re: [bug report, 6.1.52] /proc/loadavg shows incorrect values
On Sun, 23 Mar 2025 20:45:51 +0800
ying chen <yc1082463@...il.com> wrote:
> Hello everyone. Have you ever encountered a similar situation?
>
> On Tue, Mar 18, 2025 at 9:54 PM ying chen <yc1082463@...il.com> wrote:
> >
> > Hello all,
> >
> > In our production environment, "cat /proc/loadavg" shows incorrect
> > huge values. The kernel version is 6.1.52. So far, at least four such
> > cases have been found. It seems to be a kernel bug.
> >
> > ~$ cat /proc/loadavg
> > 4294967392.49 4294967395.80 4294967395.83 87/16100 2341720
> >
> > top output is below:
> >
> > top - 21:12:13 up 191 days, 20:50, 1 user, load average:
> > 4294967397.45, 4294967396.82, 4294967396.15
4294967397 = 0x100000065, i.e. 2^32 + 101.
Which looks like some calculation overflowed.
A 191-day uptime is quite long (I reboot to update my kernel every
month). Perhaps something accumulated over that time caused an overflow.
Interestingly, in 5.14 some of the related values were converted from
long to int. Not sure if anything there could have caused this.
Just something to look at.
-- Steve
> > Tasks: 2388 total, 3 running, 1208 sleeping, 0 stopped, 0 zombie
> > %Cpu(s): 27.9 us, 6.7 sy, 0.0 ni, 57.3 id, 0.5 wa, 1.7 hi, 5.8 si, 0.0 st
> > KiB Mem : 99966995+total, 56704217+free, 22655678+used, 20607096+buff/cache
> > KiB Swap: 0 total, 0 free, 0 used. 68817177+avail Mem