Message-ID: <Pine.LNX.4.64.0706170018400.3414@linmac.oyster.ru>
Date:	Sun, 17 Jun 2007 00:31:58 +0400 (MSD)
From:	malc <av1474@...tv.ru>
To:	Ingo Molnar <mingo@...e.hu>
cc:	Balbir Singh <balbir@...ux.vnet.ibm.com>,
	linux-kernel@...r.kernel.org,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>
Subject: Re: [patch] sched: accurate user accounting

On Sat, 16 Jun 2007, Ingo Molnar wrote:

>
> * malc <av1474@...tv.ru> wrote:
>
>>> Interesting, the idle time accounting (done from
>>> account_system_time()) has not changed. Has your .config changed?
>>> Could you please send it across. I've downloaded apc and I am trying
>>> to reproduce your problem.
>>
>> http://www.boblycat.org/~malc/apc/cfs/ has the config for 2.6.21 and the
>> diff against 2.6.21.4-cfs-v16.
>
> hm. Could you add this to your kernel boot command line:
>
>     highres=off nohz=off
>
> and retest, to check whether this problem is independent of any
> timer-event effects?

It certainly makes a difference. Without dynticks, however, the scheduler
still moves the task (be it hog or mplayer) between CPUs for no good
reason; for hog the switching happens every few seconds (instead of more
or less all the time, as with dynticks). What's rather strange is that
while it hogs 7x% of CPU on core#1, it only hogs 3x% on core#2 (the
percentage is obtained by timing the idle handler, not from /proc/stat;
according to /proc/stat both cores are 0% loaded).
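
(For reference, a minimal sketch of the kind of /proc/stat-based load
sampling the above percentages are being compared against; the aggregate
"cpu" line layout and the 1-second interval are assumptions, and this is
not apc's or hog's actual code:)

/* Sketch: sample the aggregate "cpu" line of /proc/stat twice and
 * report busy time as a percentage of the interval.  Assumes the
 * usual user/nice/system/idle ordering of the first four fields. */
#include <stdio.h>
#include <unistd.h>

static int read_stat(unsigned long long *busy, unsigned long long *total)
{
	unsigned long long user, nice, system, idle;
	FILE *f = fopen("/proc/stat", "r");

	if (!f)
		return -1;
	if (fscanf(f, "cpu %llu %llu %llu %llu",
		   &user, &nice, &system, &idle) != 4) {
		fclose(f);
		return -1;
	}
	fclose(f);
	*busy  = user + nice + system;
	*total = user + nice + system + idle;
	return 0;
}

int main(void)
{
	unsigned long long b0, t0, b1, t1;

	for (;;) {
		if (read_stat(&b0, &t0))
			return 1;
		sleep(1);
		if (read_stat(&b1, &t1))
			return 1;
		/* If ticks are misattributed (e.g. the task sleeps
		 * whenever the timer interrupt fires), this can read
		 * near 0% even though a CPU is fully busy. */
		printf("busy: %.1f%%\n",
		       t1 > t0 ? 100.0 * (b1 - b0) / (t1 - t0) : 0.0);
	}
}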

Live report... After a while it stabilized and was running on core#2
all the time; when the process was stopped and restarted, it ran
constantly on core#1 (with ~70% load).

Btw, I don't want to waste anyone's time here. All I originally wanted
was to know whether anything had been done (as per the LWN article) to
improve the accuracy of the system-wide statistics in /proc/stat. It
turns out that nothing really happened in this area, but the latest
developments (CFS/dynticks) have some peculiar effect on hog. Plus,
this constant migration of hog/mplayer is somewhat strange.
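
(For context, a rough illustration of how tick-based accounting can be
fooled; this is not hog's actual source, which lives in the apc tarball,
and the 10 ms period simply assumes HZ=100. Since it does not
synchronize to tick boundaries, it only approximates the effect:)

/* Illustration of tick-evading CPU use: spin for most of an assumed
 * 10 ms (HZ=100) timer period, then sleep briefly so that, with luck,
 * the tick fires while we are blocked and gets charged to idle. */
#include <time.h>

int main(void)
{
	const long period_ns = 10 * 1000 * 1000;	/* assumed tick period */
	struct timespec nap = { 0, 1 * 1000 * 1000 };	/* 1 ms nap */

	for (;;) {
		struct timespec start, now;

		clock_gettime(CLOCK_MONOTONIC, &start);
		/* burn CPU for ~80% of the period */
		do {
			clock_gettime(CLOCK_MONOTONIC, &now);
		} while ((now.tv_sec - start.tv_sec) * 1000000000L +
			 (now.tv_nsec - start.tv_nsec) < period_ns * 8 / 10);
		/* then yield across the (hoped-for) tick */
		nanosleep(&nap, NULL);
	}
}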

Live report... again. Now that hog had stabilized, running exclusively
on core#1 at ~70% load, I switched to the machine where it runs, and
merely switching windows in IceWM made the reported system load drop
to 30%. Quite reproducible, too.

-- 
vale
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
