Message-ID: <Pine.LNX.4.64.0702141023000.672@home.oyster.ru>
Date: Wed, 14 Feb 2007 10:28:46 +0300 (MSK)
From: malc <av1474@...tv.ru>
To: Con Kolivas <kernel@...ivas.org>
cc: Pavel Machek <pavel@....cz>, linux-kernel@...r.kernel.org
Subject: Re: CPU load
On Wed, 14 Feb 2007, Con Kolivas wrote:
> On Wednesday 14 February 2007 09:01, malc wrote:
>> On Mon, 12 Feb 2007, Pavel Machek wrote:
>>> Hi!
[..snip..]
>>> I have (had?) code that 'exploits' this. I believe I could eat 90% of cpu
>>> without being noticed.
>>
>> A slightly changed version of hog (around 3 lines in total changed) does
>> that easily on 2.6.18.3 on PPC.
>>
>> http://www.boblycat.org/~malc/apc/load-hog-ppc.png
>
> I guess it's worth mentioning this is _only_ about displaying the cpu usage to
> userspace, as the cpu scheduler knows the accounting of each task in
> different ways. This behaviour can not be used to exploit the cpu scheduler
> into a starvation situation. Using the discrete per process accounting to
> accumulate the displayed values to userspace would fix this problem, but
> would be expensive.
Guess you are right, but, once again, the problem is not so much about
fooling the system into doing something or other as about confusing the
user:

a. Everything is fine - the load is 0% - so the fact that the system is
   overheating and/or that some processes do not do as much as they
   could is probably due to bad hardware.

b. The weird load pattern must be the result of bugs in my code.
   (And then a whole lot of time/effort is poured into fixing a
   problem which simply is not there.)

The current situation ought to be documented. Better yet, some flag
could be introduced somewhere in the system so that it exports real
values to /proc, not estimations that are inaccurate in some cases
(like hog).
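For reference, the trick hog plays boils down to dodging the timer tick:
burn the CPU for most of a tick period, then sleep across the tick so the
sampling accounting never catches the task running. A rough sketch of the
idea (not the actual hog code; HZ and the 90/10 split below are just
assumptions):

    #include <time.h>

    #define HZ       250                   /* assumed tick rate */
    #define TICK_NS  (1000000000L / HZ)

    static long long now_ns(void)
    {
            struct timespec ts;
            clock_gettime(CLOCK_MONOTONIC, &ts);
            return ts.tv_sec * 1000000000LL + ts.tv_nsec;
    }

    int main(void)
    {
            for (;;) {
                    long long start = now_ns();
                    struct timespec nap = { 0, TICK_NS / 10 };

                    /* burn the cpu for ~90% of the tick period ... */
                    while (now_ns() - start < TICK_NS * 9 / 10)
                            ;

                    /* ... then sleep across the tick so the per-tick
                       sampler never finds this task on the cpu */
                    nanosleep(&nap, NULL);
            }
            return 0;
    }

(The real hog differs in the details, but the effect is the same: tools
that read the tick-sampled counters from /proc report the task as nearly
idle while it is eating most of a cpu.)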
--
vale