Message-ID: <alpine.DEB.1.10.0909101725130.25913@venus.araneidae.co.uk>
Date: Thu, 10 Sep 2009 17:27:57 +0100 (BST)
From: Michael Abbott <michael@...neidae.co.uk>
To: Martin Schwidefsky <schwidefsky@...ibm.com>
cc: Johan van Baarlen <vanbaajf@...all.nl>,
Andrew Morton <akpm@...ux-foundation.org>,
Jan Engelhardt <jengelh@...ozas.de>,
Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
Johan van Baarlen <jf@...baarlen.demon.nl>
Subject: Re: [PATCH] Re: /proc/uptime idle counter remains at 0
On Thu, 10 Sep 2009, Martin Schwidefsky wrote:
> On Thu, 10 Sep 2009 15:02:53 +0200
> "Johan van Baarlen" <vanbaajf@...all.nl> wrote:
> > with this patch the idle-time in /proc/uptime makes a lot more sense - but
> > it runs about a factor of 4 too fast (I'm thinking this is no coincidence
> > - I've got 4 CPUs in this box, and simply adding 4 idle timers means you
> > are going 4 times too fast).
> >
> > Can we just add idletime /= (i+1) after the for-each-cpu loop, or am I
> > thinking too simply?
>
> With "/= (i+1)" you mean dividing the result by the number of cpus, no?
> That doesn't work because of that fact that the value used to contain
> the accumulated idle time of a uni-processor system and cpu hotplug. The
> only way to get meaningful numbers is to make the value contain the sum
> of the idle over all possible cpus. The user space tool that reads the
> value needs to take the number of currently active cpus into account.
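For what it's worth, a minimal user-space sketch of what Martin describes
(hypothetical, not taken from any existing tool): read both fields from
/proc/uptime and scale the idle sum by the number of currently online
cpus.  As discussed below, dividing by the current count is only an
approximation once cpus have been hot-plugged in or out.

	/* Hypothetical sketch: assumes the kernel exports, in the
	 * second field of /proc/uptime, the idle time summed over
	 * all possible cpus. */
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		double uptime, idle;
		long ncpus;
		FILE *f = fopen("/proc/uptime", "r");

		if (!f || fscanf(f, "%lf %lf", &uptime, &idle) != 2) {
			perror("/proc/uptime");
			return 1;
		}
		fclose(f);

		/* Scale by the cpus online right now; this ignores
		 * any hotplug history, which is exactly the problem
		 * raised below. */
		ncpus = sysconf(_SC_NPROCESSORS_ONLN);
		if (ncpus < 1)
			ncpus = 1;
		printf("uptime %.2fs, idle %.2fs (%.2fs per online cpu)\n",
		       uptime, idle, idle / ncpus);
		return 0;
	}
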
I've never liked this solution, as it's hugely unfriendly and requires
access to detailed information which user space doesn't necessarily have:
how is any particular user space application supposed to know the detailed
history of cpu hotplug insertion and removal to accurately compute the
idle time?
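To illustrate with made-up numbers: suppose a box boots with 4 cpus and
sits completely idle for 100 seconds (the sum reaches 400s), and then 3
cpus are hot-removed and it idles for another 100 seconds (sum 500s).
Dividing by the one cpu now online reports 500s of idle against 200s of
uptime, which is nonsense; dividing by 4 reports 125s and under-counts
the second era.  Only the full hotplug history lets you recover the
200s the machine actually spent idle.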
On the other hand, I don't have an alternative to suggest...