Message-ID: <20090928191506.40b61793@mschwide.boeblingen.de.ibm.com>
Date:	Mon, 28 Sep 2009 19:15:06 +0200
From:	Martin Schwidefsky <schwidefsky@...ibm.com>
To:	Linus Torvalds <torvalds@...ux-foundation.org>
Cc:	Eric Dumazet <eric.dumazet@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	John Stultz <johnstul@...ibm.com>,
	Linux Kernel Mailing List <linux-kernel@...r.kernel.org>
Subject: Re: Linux 2.6.32-rc1

On Mon, 28 Sep 2009 08:39:31 -0700 (PDT)
Linus Torvalds <torvalds@...ux-foundation.org> wrote:

> On Mon, 28 Sep 2009, Eric Dumazet wrote:
> > Linus Torvalds a écrit :
> > > 
> > > Go wild, test it out, and let us know about any regressions you find,
> > 
> > Something seems wrong with process time accounting.
> > 
> > The following program should consume 10*8 seconds of CPU on an 8-CPU machine,
> > but with 2.6.32-rc1 the numbers are crazy.
> 
> Ok, so the top-level process time looks sane _while_ the thing is running 
> (sum of all threads), but the per-thread times look broken (as if they had 
> randomly had the times of the other threads added into them - looks to me 
> like they on average had half the other threads' time accounted into 
> them).
> 
> And then at the end, it looks like the time of the threads (which was 
> over-accounted) get re-accounted back into the main process, so the final 
> time is indeed wildly inflated.
> 
> IOW, looks like we're adding thread times multiple times to other threads 
> (and then finally to the parent).
> 
> I'm adding the usual suspects to the cc, and leaving your results and 
> test-program quoted here for them.. Thomas, Martin, John - if you're not 
> the people I should blame for this, let me know.

Uh-oh.. usual suspects, eh?

For starters I ran the test program on an s390 with
VIRT_CPU_ACCOUNTING=y:

time ./burn-cpu 
  PID TTY      STAT   TIME COMMAND 
 2979 pts/0    -      0:08 ./burn-cpu
    - -        Sl+    0:00 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
    - -        Rl+    0:01 -         
  PID TTY      STAT   TIME COMMAND   
 2979 pts/0    -      0:16 ./burn-cpu
    - -        Sl+    0:00 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
    - -        Rl+    0:02 -         
  PID TTY      STAT   TIME COMMAND   
 2979 pts/0    -      0:25 ./burn-cpu
    - -        Sl+    0:00 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
    - -        Rl+    0:03 -         
  PID TTY      STAT   TIME COMMAND   
 2979 pts/0    -      0:33 ./burn-cpu
    - -        Sl+    0:00 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
    - -        Rl+    0:04 -         
  PID TTY      STAT   TIME COMMAND   
 2979 pts/0    -      0:41 ./burn-cpu
    - -        Sl+    0:00 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
    - -        Rl+    0:05 -         
  PID TTY      STAT   TIME COMMAND   
 2979 pts/0    -      0:50 ./burn-cpu
    - -        Sl+    0:00 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
    - -        Rl+    0:06 -
  PID TTY      STAT   TIME COMMAND
 2979 pts/0    -      0:58 ./burn-cpu
    - -        Sl+    0:00 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
    - -        Rl+    0:07 -
  PID TTY      STAT   TIME COMMAND
 2979 pts/0    -      1:07 ./burn-cpu
    - -        Sl+    0:00 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
    - -        Rl+    0:08 -
  PID TTY      STAT   TIME COMMAND
 2979 pts/0    -      1:15 ./burn-cpu
    - -        Sl+    0:00 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
    - -        Rl+    0:09 -
  PID TTY      STAT   TIME COMMAND
 2979 pts/0    -      1:25 ./burn-cpu
    - -        S+     1:25 -

real    0m10.708s
user    1m25.051s
sys     0m0.174s

Looks better: that gives an average of 10.63 seconds per thread, and the
per-thread numbers are fine. I'll see what happens with the test case on my
pc@...e.
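
The per-thread snapshots above can be reproduced with a loop along these lines. This is a sketch: `sleep 5` stands in for the burn-cpu binary (which is not included here), and `ps -L` prints one row per thread/LWP; the exact column trimming in the output above looks hand-formatted.

```shell
# Sample per-thread state and CPU time once per second while a
# workload runs; "sleep 5" stands in for ./burn-cpu here.
sleep 5 &
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    ps -L -p "$pid" -o pid,tty,stat,time,comm
    sleep 1
done
```

The TIME column per row is the per-thread CPU time; under the bug Linus describes, those rows inflate faster than one second per second because other threads' times leak into them.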

-- 
blue skies,
   Martin.

"Reality continues to ruin my life." - Calvin.

