Message-ID: <f488382f0910070111x49fdb565p50d29786540bbb3f@mail.gmail.com>
Date:	Wed, 7 Oct 2009 01:11:38 -0700
From:	Steven Noonan <steven@...inklabs.net>
To:	ext-eero.nurkkala@...ia.com
Cc:	"linux-kernel@...r.kernel.org" <linux-kernel@...r.kernel.org>,
	Thomas Gleixner <tglx@...utronix.de>,
	Rik van Riel <riel@...hat.com>,
	Venkatesh Pallipadi <venkatesh.pallipadi@...el.com>
Subject: Re: [BISECTED] "conservative" cpufreq governor broken

On Wed, Oct 7, 2009 at 1:05 AM, Steven Noonan <steven@...inklabs.net> wrote:
> On Wed, Oct 7, 2009 at 12:49 AM, Eero Nurkkala
> <ext-eero.nurkkala@...ia.com> wrote:
>> On Wed, 2009-10-07 at 09:30 +0200, ext Steven Noonan wrote:
>>>
>>> Okay, wow, I'm a moron. I misread what cpu_idle() was intended to be
>>> for. I thought that cpu_idle() was a function that was periodically
>>> called whenever the CPU had nothing to do, but now I see that it's
>>> actually the main loop. I should really read the code next time.
>>>
>>> I've moved the statistics printout code to the _inside_ of that
>>> infinite loop and retested. I had it print every several hundred
>>> iterations. Here's the results (note the machine was idle the whole
>>> time, except for about the first 10-20 seconds while the machine
>>> booted):
>>>
>>> [    3.627716] timings[0]: 2250511125 / 3627716116
>>> [    6.946216] timings[0]: 4780901366 / 6946213531
>>> [   13.355182] timings[0]: 9385417604 / 13355183525
>>> [   18.551304] timings[1]: 16300853077 / 18551301189
>>> [   21.589039] timings[0]: 15984495433 / 21589037480
>>> [   47.152733] timings[1]: 44386121538 / 47152731476
>>> [   51.682630] timings[0]: 45713834076 / 51682628295
>>> [   79.587359] timings[0]: 73524821916 / 79587356820
>>> [   88.630110] timings[1]: 85324277596 / 88630109605
>>> [   96.082386] timings[0]: 89691306072 / 96082384539
>>>
>>
>> Those look good.
>>
>> Well, might as well then go for:
>> drivers/cpufreq/cpufreq_conservative.c
>> dbs_check_cpu() ->
>> load = 100 * (wall_time - idle_time) / wall_time; <- What is your load?
>
> That's probably the problem...
>
> [   40.632277] cpufreq load = 100 * (66667 - 3310) / 66667 = 95
> [   40.698947] cpufreq load = 100 * (66661 - 3238) / 66661 = 95
> [   73.965425] cpufreq load = 100 * (66667 - 12820) / 66667 = 80
> [   74.032095] cpufreq load = 100 * (66661 - 1124) / 66661 = 98
> [  107.298571] cpufreq load = 100 * (66666 - 13092) / 66666 = 80
> [  107.365301] cpufreq load = 100 * (66722 - 3317) / 66722 = 95
> [  140.631717] cpufreq load = 100 * (66666 - 3311) / 66666 = 95
> [  140.698387] cpufreq load = 100 * (66662 - 3237) / 66662 = 95
>
> idle_time is wrong.

Actually, it's more likely that the idle_time there is correct and
something else is running away. My system's fans are spinning at about
4000 RPM, when they'd normally run at around 2000 RPM for this load
average. I suspect something really is going wild.

>
>> Let's assume the load is sane, and look (in dbs_check_cpu()) for
>>        if (load < (dbs_tuners_ins.down_threshold - 10)) {
>>
>> whether it is ever taken... if not, what is your
>> (dbs_tuners_ins.down_threshold - 10) ?
>>
>> - Eero
>>
>>
>
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
