Message-ID: <Pine.LNX.4.64.0905110656180.6983@venus.araneidae.co.uk>
Date:	Mon, 11 May 2009 07:23:58 +0100 (BST)
From:	Michael Abbott <michael@...neidae.co.uk>
To:	Jan Engelhardt <jengelh@...ozas.de>
cc:	Martin Schwidefsky <schwidefsky@...ibm.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: [PATCH] Re: /proc/uptime idle counter remains at 0

On Mon, 11 May 2009, Jan Engelhardt wrote:
> On Sunday 2009-05-10 19:12, Martin Schwidefsky wrote:
> >> So, were the updates to uptime.c missed, or do we now live with 
> >> /proc/uptime constantly showing 0?

Please, let's not do this -- it breaks my instrument (which currently 
thinks the processor is overloaded).

> >The second paragraph from git commit 79741dd tells you more about this:
> >
> >In addition, idle time is no longer added to the stime of the idle 
> >process. This field now contains the system time of the idle process, 
> >as it should. On systems without VIRT_CPU_ACCOUNTING this will always 
> >be zero, as every tick that occurs while idle is running will be 
> >accounted as idle time.
> >
> >The point is the semantics of the stime field for the idle process. The 
> >stime field used to contain the real system time (cpu really did 
> >something) of the idle process plus the idle time (cpu is stopped). 
> >After the change the field only contains the real system time, which is 
> >imho much more useful, no?
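
If I've understood that correctly, the effect can be sketched roughly as 
follows -- my paraphrase, not the actual accounting code:

/*
 * Paraphrase of the change, ignoring VIRT_CPU_ACCOUNTING: what
 * happens to a timer tick that lands while the idle task is running.
 * The function name and shape here are invented for illustration.
 */
static void sketch_idle_tick(int cpu, cputime_t one_tick)
{
	/* After 79741dd the tick is charged only to the per-CPU idle
	 * counter, which /proc/stat reports ... */
	kstat_cpu(cpu).cpustat.idle =
		cputime64_add(kstat_cpu(cpu).cpustat.idle,
			      cputime_to_cputime64(one_tick));
	/* ... whereas before it was also added to the idle task's
	 * stime, which is why init_task.stime used to accumulate idle
	 * time and now stays at zero without VIRT_CPU_ACCOUNTING. */
}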
> 
> Actually doing something while idle would then probably be limited to 
> CPUs that have no HLT instruction/state, like ancient i386, right?
> 
> Are the semantics of /proc/uptime (more-or-less officially) defined 
> somewhere, e.g. written down in a manual page?
> 
> Nevertheless, one could argue that some people or their scripts have 
> interpreted the second field as the time during which no process was 
> running -- a minimalistic way to gauge average system use in % beyond 
> the 1/5/15 loadavg counters. So the field could be kept, or, now that 
> the 2nd position displays 0.00, be re-added. Depending on how 
> “standardized” /proc/uptime's format is, the 0.00 could either stay 
> in the second position or move to a third.
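
That is pretty much how such a script would read it -- a minimal 
illustration (error handling kept to a bare minimum; note that on an SMP 
box the summed idle time can exceed wall-clock uptime, a point I return 
to below):

#include <stdio.h>

/* Average CPU use since boot from the two /proc/uptime fields:
 * busy% = 100 * (1 - idle / uptime). */
int main(void)
{
	double up, idle;
	FILE *f = fopen("/proc/uptime", "r");

	if (f == NULL || fscanf(f, "%lf %lf", &up, &idle) != 2)
		return 1;
	fclose(f);
	printf("%.1f%% busy since boot\n", 100.0 * (1.0 - idle / up));
	return 0;
}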

I have to confess I don't really understand the logic of what's going on 
here -- in particular, what does the idle process do other than account 
for time when the processor has nothing useful to do?  It does seem to me 
that its .utime and .stime fields are now less than useful -- maybe they 
can be deleted?

I've always assumed that the second field of /proc/uptime was a simple 
measure of time not spent doing real work, in other words a crude measure 
of spare CPU resources.  My instrument uses the two fields of this file 
to compute a measure of CPU loading so it can raise an alert if the CPU 
doesn't have enough spare (idle) capacity.
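
Concretely, the computation amounts to something like this simplified 
sketch (the real sampling interval and alert threshold are different):

#include <stdio.h>
#include <unistd.h>

/* Compare the growth of the idle field against wall-clock uptime
 * over a short window; alert when little idle capacity remains. */
static int read_uptime(double *up, double *idle)
{
	FILE *f = fopen("/proc/uptime", "r");
	int ok = f != NULL && fscanf(f, "%lf %lf", up, idle) == 2;

	if (f != NULL)
		fclose(f);
	return ok;
}

int main(void)
{
	double up0, idle0, up1, idle1, load;

	if (!read_uptime(&up0, &idle0))
		return 1;
	sleep(5);
	if (!read_uptime(&up1, &idle1))
		return 1;
	load = 1.0 - (idle1 - idle0) / (up1 - up0);
	if (load > 0.9)
		printf("alert: %.0f%% loaded, little idle capacity\n",
		       100.0 * load);
	return 0;
}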

So as a simple solution, I've attached a patch where I just copy the idle 
field processing from fs/proc/stat.c.  I expect that on a multi-processor 
machine things may not be quite so simple: since uptime is measured in 
elapsed wall-clock time, idle time should be too, so we probably also 
need to divide by the number of processors.  I'm afraid I don't have a 
multiprocessor test system, and /proc/stat seems OK, so I've not made 
this refinement.
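
(For the record, I imagine the refinement is a single extra line after 
the summing loop, along these lines -- untested:)

	/* Untested sketch: scale the summed idle time back to
	 * wall-clock terms by dividing by the number of CPUs. */
	idletime = idletime / num_possible_cpus();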


Date: Mon, 11 May 2009 07:14:19 +0100
Subject: [PATCH] Fix idle time field in /proc/uptime

Git commit 79741dd changes idle cputime accounting, but unfortunately
the /proc/uptime file hasn't caught up.  Here the idle time calculation
from /proc/stat is copied over.

Signed-off-by: Michael Abbott <michael.abbott@...mond.ac.uk>
---
 fs/proc/uptime.c |    9 +++++++--
 1 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/fs/proc/uptime.c b/fs/proc/uptime.c
index df26aa8..0d531bf 100644
--- a/fs/proc/uptime.c
+++ b/fs/proc/uptime.c
@@ -2,6 +2,7 @@
 #include <linux/proc_fs.h>
 #include <linux/sched.h>
 #include <linux/time.h>
+#include <linux/kernel_stat.h>
 #include <asm/cputime.h>
 
 static int proc_calc_metrics(char *page, char **start, off_t off,
@@ -23,8 +24,12 @@ static int uptime_read_proc(char *page, char **start, off_t off, int count,
 {
 	struct timespec uptime;
 	struct timespec idle;
-	int len;
-	cputime_t idletime = cputime_add(init_task.utime, init_task.stime);
+	int len, i;
+	cputime_t idletime = cputime_zero;
+
+	/* Sum per-CPU idle time, as /proc/stat does. */
+	for_each_possible_cpu(i)
+		idletime = cputime64_add(idletime, kstat_cpu(i).cpustat.idle);
 
 	do_posix_clock_monotonic_gettime(&uptime);
 	monotonic_to_bootbased(&uptime);
-- 
1.6.1.3
