Message-ID: <20111201140749.GB4269@tiehlicka.suse.cz>
Date: Thu, 1 Dec 2011 15:07:49 +0100
From: Michal Hocko <mhocko@...e.cz>
To: "Rafael J. Wysocki" <rjw@...k.pl>
Cc: "Artem S. Tashkinov" <t.artem@...os.com>, pomac@...or.com,
linux-kernel@...r.kernel.org, tino.keitel@...ei.de,
Len Brown <lenb@...nel.org>
Subject: Re: [REGRESSION] [Linux 3.2] top/htop and all other CPU usage
[Let's add Len to the CC for the idle driver discussion]
On Wed 30-11-11 20:56:54, Rafael J. Wysocki wrote:
> On Wednesday, November 30, 2011, Michal Hocko wrote:
> > On Tue 29-11-11 23:51:16, Rafael J. Wysocki wrote:
> > > On Tuesday, November 29, 2011, Michal Hocko wrote:
[...]
> > > > I haven't found any intel_idle machine in my lab so far and all other
> > > > acpi_idle machines seem to work (or at least randomly selected ones) so
> > > > this smells like a major difference in the setup.
> > >
> > > I'm able to reproduce that with acpi_driver on one box, but not on demand.
> >
> > And do you see the same thing (no idle/io_wait) updates?
>
> Actually, I was wrong. The box I'm seeing the issue on also has "none"
> in /sys/devices/system/cpu/cpuidle/current_driver. Sorry for the confusion.
OK. So we have seen the issue only with the intel_idle and "none" drivers
so far. acpi_idle, which is what my machines use, works just fine.
I think we should focus on those drivers.
To summarize the issue:
Users are seeing weird values reported by [h]top: CPUs appear to be at
100% even though there is nothing hogging them. /proc/stat data
collected on the affected systems shows that idle/io_wait are not
accounted properly.
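(For reference, [h]top derives these percentages from the per-cpu lines
in /proc/stat, where the 4th and 5th fields are the cumulative idle and
iowait ticks; a sample line with made-up values:

    cpu0 4705 356 584 3699176 23048 0 123 0 0 0
         (user nice system idle iowait irq softirq steal guest guest_nice)

On an affected system the idle/iowait fields stop advancing even though
the CPU is in fact idle, which is why everything looks 100% busy.)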
It has been identified that the problem disappears if a25cac51 [proc:
Consider NO_HZ when printing idle and iowait times] is reverted.
That patch fixes a bug where idle/io_wait times are not reported
properly while a CPU is tickless. It relies on get_cpu_idle_time_us,
which reports either ts->idle_sleeptime or
(ts->idle_sleeptime + now - ts->idle_entrytime) if we are idle at the
moment.
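Roughly, the read side looks like this (a simplified sketch from memory
of the 3.2-era kernel/time/tick-sched.c, not verbatim source; the real
function also handles the iowait split via nr_iowait_cpu() and the
last_update_time path):

    u64 get_cpu_idle_time_us(int cpu, u64 *last_update_time)
    {
            struct tick_sched *ts = &per_cpu(tick_cpu_sched, cpu);
            ktime_t now = ktime_get();
            ktime_t idle;

            if (ts->idle_active) {
                    /* CPU is idle right now: include the running period */
                    ktime_t delta = ktime_sub(now, ts->idle_entrytime);

                    idle = ktime_add(ts->idle_sleeptime, delta);
            } else {
                    idle = ts->idle_sleeptime;
            }

            return ktime_to_us(idle);
    }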
The implementation is not race free (we'd better not use locks in that
path...), so we might race:
E.g.

    CPU1                                CPU2
    now = ktime_get
    tick_nohz_start_idle
      ts->idle_entrytime = now;
                                        if (ts->idle_active)
      ts->idle_active = 1
                                        [...]
                                        return idle_sleeptime
But this is OK, because the sleeptime will be more or less accurate; we
just skip a few ticks.
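For reference, the two paths racing here look roughly like this (again
a simplified sketch of the 3.2-era tick-sched code, not verbatim
source):

    static ktime_t tick_nohz_start_idle(int cpu, struct tick_sched *ts)
    {
            ktime_t now = ktime_get();

            /* fold the previous idle period into the counters */
            update_ts_time_stats(cpu, ts, now, NULL);
            ts->idle_entrytime = now;
            ts->idle_active = 1;
            return now;
    }

    static void update_ts_time_stats(int cpu, struct tick_sched *ts,
                                     ktime_t now, u64 *last_update_time)
    {
            if (ts->idle_active) {
                    ktime_t delta = ktime_sub(now, ts->idle_entrytime);

                    /* the real code credits iowait_sleeptime instead
                     * when nr_iowait_cpu(cpu) > 0 */
                    ts->idle_sleeptime = ktime_add(ts->idle_sleeptime, delta);
                    ts->idle_entrytime = now;
            }
            if (last_update_time)
                    *last_update_time = ktime_to_us(now);
    }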
It would be worse if we had a race like:

    CPU1                                CPU2
    now = ktime_get
    tick_nohz_start_idle
                                        now = ktime_get
                                        update_ts_time_stats()
      ts->idle_entrytime = now;
      ts->idle_active = 1
                                          if (ts->idle_active)
                                            delta = ktime_sub(now, idle_entrytime)
                                            ktime_add(idle_sleeptime, delta)
In this case ktime_sub might underflow and produce a negative delta,
but AFAIU the ktime_* magic that should only make idle_sleeptime
smaller in the end after the ktime_add (we do not add a small number
but rather subtract one), right?
So it shouldn't be a big deal either.
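To illustrate (a toy model with made-up numbers; ktime_t is, modulo the
32-bit representation, a signed 64-bit nanosecond count, so ktime_sub
and ktime_add behave like plain s64 arithmetic):

    long long idle_sleeptime = 5000000000LL; /* 5s of accumulated idle */
    long long idle_entrytime = 1000000200LL; /* just written by CPU1 */
    long long now            = 1000000000LL; /* CPU2's stale ktime_get() */

    long long delta = now - idle_entrytime;   /* ktime_sub -> -200 */
    long long idle  = idle_sleeptime + delta; /* ktime_add -> 4999999800 */

so the reported idle time ends up slightly smaller than idle_sleeptime
rather than wildly wrong.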
So the question is: what is the role of the idle driver here?
Thanks
--
Michal Hocko
SUSE Labs
SUSE LINUX s.r.o.
Lihovarska 1060/12
190 00 Praha 9
Czech Republic