lists.openwall.net
Open Source and information security mailing list archives
Date:   Tue, 25 Apr 2017 23:26:00 +0200
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Doug Smythies <dsmythies@...us.net>
Cc:     "Rafael J. Wysocki" <rafael@...nel.org>,
        "Rafael J. Wysocki" <rjw@...ysocki.net>,
        Mel Gorman <mgorman@...hsingularity.net>,
        Rafael Wysocki <rafael.j.wysocki@...el.com>,
        Jörg Otte <jrg.otte@...il.com>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        Srinivas Pandruvada <srinivas.pandruvada@...ux.intel.com>
Subject: Re: Performance of low-cpu utilisation benchmark regressed severely
 since 4.6

On Tue, Apr 25, 2017 at 9:13 AM, Doug Smythies <dsmythies@...us.net> wrote:
> On 2017.04.24 07:25 Doug wrote:
>> On 2017.04.23 18:23 Srinivas Pandruvada wrote:
>>> On Mon, 2017-04-24 at 02:59 +0200, Rafael J. Wysocki wrote:
>>>> On Sun, Apr 23, 2017 at 5:31 PM, Doug Smythies <dsmythies@...us.net> wrote:
>>
>>>>> It looks like the cost is mostly related to moving the load from
>>>>> one CPU to
>>>>> another and waiting for the new one to ramp up then.
>>> Last time when we analyzed Mel's result last year this was the
>>> conclusion. The problem was more apparent on systems with per core P-
>>> state.
>>
>> ?? I have never seen this particular use case before.
>> Unless I have looked the wrong thing, Mel's issue last year was a
>> different use case.
>>
>> ...[cut]...
>>
>>>>>> We can do one more trick I forgot about.  Namely, if we are about
>>>>>> to increase
>>>>>> the P-state, we can jump to the average between the target and
>>>>>> the max
>>>>>> instead of just the target, like in the appended patch (on top of
>>>>>> linux-next).
>>>>>>
>>>>>> That will make the P-state selection really aggressive, and so
>>>>>> costly energetically, but it should allow small jumps of the
>>>>>> average load above 0 to cause big jumps of the target P-state.
>>>>> I'm already seeing the energy costs of some of this stuff.
>>>>> 3050.2 Seconds.
>>>> Is this with or without reducing the sampling interval?
>>
>> It was without reducing the sample interval.
>>
>> So, it was the branch you referred us to the other day:
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
>>
>> with your patch (now deleted from this thread) applied.
>>
>>
>> ...[cut]...
>>
>>>> Anyway, your results are somewhat counter-intuitive.
>>
>>>> Would it be possible to run this workload with the linux-next branch
>>>> and the schedutil governor and see if the patch at
>>>> https://patchwork.kernel.org/patch/9671829/ makes any difference?
>>
>> git://git.kernel.org/pub/scm/linux/kernel/git/rafael/linux-pm.git linux-next
>> Plus that patch is in progress.
>
> 3387.76 Seconds.
> Idle power 3.85 watts.
>
> Other potentially interesting information for 2 hour idle test:
> Driver called 21209 times. Maximum duration 2396 Seconds. Minimum duration 20 mSec.
> Histogram of target pstates:
> 16 8
> 17 3149
> 18 1436
> 19 1479
> 20 196
> 21 2
> 22 3087
> 23 375
> 24 22
> 25 4
> 26 2
> 27 3736
> 28 2177
> 29 13
> 30 0
> 31 0
> 32 2
> 33 0
> 34 1533
> 35 246
> 36 0
> 37 4
> 38 3738
>
> Compared to kernel 4.11-rc7 (passive mode, schedutil governor)
> 3297.82 (re-stated from a previous e-mail)
> Idle power 3.81 watts

All right, so it looks like the patch makes the workload run longer
and also use more energy.

Using more energy is quite as expected, but slowing things down isn't,
as the patch aggregates the updates that would have been discarded by
taking the maximum utilization over them, which should result in
higher frequencies being used too.  It may be due to the increased
governor overhead, however.
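The aggregation idea referred to here can be sketched as follows. This is a rough illustration of the principle, not the actual schedutil code; the struct and function names are invented for the sketch:

```c
#include <assert.h>

/*
 * Instead of discarding CPU utilization updates that arrive inside the
 * rate limit, remember the maximum utilization seen and feed that to the
 * next allowed frequency evaluation.
 */
struct util_agg {
	unsigned int max_util;	/* max utilization since last evaluation */
};

static void agg_update(struct util_agg *a, unsigned int util)
{
	if (util > a->max_util)
		a->max_util = util;
}

static unsigned int agg_consume(struct util_agg *a)
{
	unsigned int u = a->max_util;

	a->max_util = 0;	/* reset for the next window */
	return u;
}
```

Because the maximum over the window is never lower than any single retained sample, this should bias frequency selection upward, which is why the slowdown in the measurements is counter-intuitive.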

> Other potentially interesting information for 2 hour idle test:
> Driver called 1631 times. Maximum duration 2510 Seconds. Minimum duration 0.587 mSec.
> Histogram of target pstates (missing lines mean 0 occurrences):
> 16 813
> 24 2
> 38 816

Thanks for the data!

Rafael
