Message-ID: <b647ffbd0706050814u1e145b82qdb344d475d9ffe93@mail.gmail.com>
Date:	Tue, 5 Jun 2007 17:14:12 +0200
From:	"Dmitry Adamushko" <dmitry.adamushko@...il.com>
To:	"Matt Mackall" <mpm@...enic.com>
Cc:	"Ingo Molnar" <mingo@...e.hu>,
	"Rusty Russell" <rusty@...tcorp.com.au>, akpm@...ux-foundation.org,
	Linux-kernel@...r.kernel.org
Subject: Re: Interesting interaction between lguest and CFS

On 05/06/07, Matt Mackall <mpm@...enic.com> wrote:
> > [...]
> > Click into the lguest window and trigger the delay.
>
> I did:
>
> while true; do sleep 1; cat /proc/sched_debug > sched_debug.txt; done
>
> and got this, hopefully inside the window:
>
> Sched Debug Version: v0.02
> now at 257428593818894 nsecs
>
> cpu: 0
>   .nr_running            : 3
>   .raw_weighted_load     : 2063
>   .nr_switches           : 242830075
>   .nr_load_updates       : 30172063
>   .nr_uninterruptible    : 0
>   .jiffies               : 64282148
>   .next_balance          : 0
>   .curr->pid             : 27182
>   .clock                 : 125650217819008823
>   .prev_clock_raw        : 257428516403535

The delta (clock - prev_clock_raw) looks insane.

The current time (which doesn't depend on rq_clock() --> sched_clock()) is
"now at 257428593818894 nsecs" (right at the beginning of the output).

'prev_clock_raw' is updated any time rq_clock() is called - basically
upon any scheduling operation (e.g. enqueue/dequeue).

now - prev_clock_raw == 257428593818894 - 257428516403535 == 77415359 ns == ~77 ms,

while 'clock' reports something crazy.. that would explain why there
was a huge "block_max" reported earlier.. I guess sched_clock() is
TSC-based in your case?

Any way to get it switched to a jiffies-based one and repeat the test?


-- 
Best regards,
Dmitry Adamushko
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
