Message-ID: <20111129081428.GC2675@tiehlicka.suse.cz>
Date: Tue, 29 Nov 2011 09:14:28 +0100
From: Michal Hocko <mhocko@...e.cz>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: Tino Keitel <tino.keitel@...ei.de>, linux-kernel@...r.kernel.org,
"Artem S. Tashkinov" <t.artem@...os.com>
Subject: Re: [REGRESSION] [Linux 3.2] top/htop and all other CPU usage metering applications has gone crackers
On Mon 28-11-11 22:41:25, Michal Hocko wrote:
> Hi,
>
> On Mon 28-11-11 21:19:26, Rafael J. Wysocki wrote:
> > On Monday, November 28, 2011, Tino Keitel wrote:
> > > On Sun, Nov 27, 2011 at 12:45:57 +0100, Rafael J. Wysocki wrote:
> > > > On Sunday, November 27, 2011, Tino Keitel wrote:
> > > > > On Thu, Nov 24, 2011 at 21:05:53 +0100, Tino Keitel wrote:
> > > > > > On Thu, Nov 24, 2011 at 10:30:15 +0000, Artem S. Tashkinov wrote:
> > > > > > > Hello,
> > > > > > >
> > > > > > > I'd like to report a weird regression in Linux 3.2 (running rc3 now) - all CPU metering applications have gone terribly mad
> > > > > > > under this kernel:
> > > > > >
> > > > > > I get the same using top, htop and the gnome system monitor with kernel
> > > > > > 3.2 on a Sandy Bridge quad core box, running Debian unstable.
> > > > >
> > > > > I just tested 3.2-rc2, and see the same bug.
> > > >
> > > > I'm seeing that too on one of my test boxes, but not all the time
> > > > (i.e. there are periods in which the readings are correct). The other boxes
> > > > I've tested with 3.2-rc are fine in that respect.
> > > >
> > > > Also, it seems that it shows 100%-(real load) when it is wrong. So, it looks
> > > > like there's an overflow somewhere in the CPU load measuring code, at least
> > > > on some CPUs.
> > >
> > > Hi,
> > >
> > > I reverted this commit and so far it looks good:
> > >
> > > commit a25cac5198d4ff2842ccca63b423962848ad24b2
> > > Author: Michal Hocko <mhocko@...e.cz>
> > > Date: Wed Aug 24 09:40:25 2011 +0200
> > >
> > > proc: Consider NO_HZ when printing idle and iowait times
> > >
> > > I'll report back tomorrow how the kernel behaves.
> >
> > Hmm. Michal, can you have a look at that, please?
>
> Hmm, my testing didn't show anything like that. Could you post the
> output of cat /proc/stat collected every second for 30s or so?
>
> Here is the output of my run with 3.2.0-rc3-00004-gdd38d29 and the attached config:
> for i in `seq 30`;
> do
> cat /proc/stat > `date +'%s'`
> sleep 1
> done
> export old_user=0 old_nice=0 old_sys=0 old_idle=0 old_iowait=0;
> grep cpu0 * | while read cpu user nice sys idle iowait rest;
> do
> echo $cpu $(($user-$old_user)) $(($nice-$old_nice)) $(($sys-$old_sys)) $(($idle-$old_idle)) $(($iowait-$old_iowait))
> old_user=$user old_nice=$nice old_sys=$sys old_idle=$idle old_iowait=$iowait
> done
>
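The per-field subtraction in the script above is essentially what top-like meters do internally. As a minimal sketch (the helper name and sample numbers are mine, not from the thread), here is how that arithmetic misbehaves once the idle counter moves backward; real meters typically use unsigned arithmetic, so the negative delta wraps to a huge value instead of going negative:

```shell
#!/bin/sh
# cpu_busy_percent OLD_IDLE OLD_TOTAL NEW_IDLE NEW_TOTAL
# Derives busy% from two snapshots of the idle and total tick counters.
cpu_busy_percent() {
    idle_delta=$(( $3 - $1 ))
    total_delta=$(( $4 - $2 ))
    echo $(( 100 * (total_delta - idle_delta) / total_delta ))
}

# Sane samples: 90 of the 100 elapsed ticks were idle -> 10% busy.
cpu_busy_percent 1000 2000 1090 2100

# Idle counter jumped backward between samples: idle_delta is negative,
# so "busy" exceeds the tick budget and the reading is nonsense.
cpu_busy_percent 1000 2000 990 2100
```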
Same results (attached) on an x86_64 box with 16 AMD CPUs in my lab,
which uses a different cpuidle driver:
$ grep . -r /sys/devices/system/cpu/cpuidle/
/sys/devices/system/cpu/cpuidle/current_driver:none
/sys/devices/system/cpu/cpuidle/current_governor_ro:menu
$ grep . -r /sys/devices/system/cpu/cpufreq/
/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate_min:10000
/sys/devices/system/cpu/cpufreq/ondemand/sampling_rate:38000
/sys/devices/system/cpu/cpufreq/ondemand/up_threshold:40
/sys/devices/system/cpu/cpufreq/ondemand/sampling_down_factor:1
/sys/devices/system/cpu/cpufreq/ondemand/ignore_nice_load:0
/sys/devices/system/cpu/cpufreq/ondemand/powersave_bias:0
/sys/devices/system/cpu/cpufreq/ondemand/io_is_busy:0
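Since the suspicion upthread is that commit a25cac5 can make the reported idle time non-monotonic under NO_HZ, the counter can also be checked directly across consecutive /proc/stat samples. A hypothetical sketch (function names are mine, not from the thread; idle is the fifth whitespace-separated field of a "cpuN ..." line, counting the cpuN label):

```shell
#!/bin/sh
# idle_field "cpu0 <user> <nice> <system> <idle> ..." -> prints the idle field.
idle_field() {
    set -- $1          # word-split the stat line into positional params
    echo "$5"
}

# check_monotonic PREV_LINE CUR_LINE -> "ok" if idle did not move backward,
# "backward" if it did (the behavior a CPU meter does not expect).
check_monotonic() {
    prev=$(idle_field "$1")
    cur=$(idle_field "$2")
    if [ "$cur" -lt "$prev" ]; then echo backward; else echo ok; fi
}

check_monotonic "cpu0 10 0 5 900 1" "cpu0 12 0 6 950 1"
check_monotonic "cpu0 10 0 5 900 1" "cpu0 12 0 6 880 1"
```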
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic
Download attachment "config.gz" of type "application/octet-stream" (33394 bytes)
Download attachment "amd_16cpus.tar.bz2" of type "application/octet-stream" (3958 bytes)