Message-ID: <bc8b6dd4-7a5e-7f58-ac24-04d256007b17@linux.intel.com>
Date:   Mon, 17 Jul 2017 12:51:44 -0700
From:   Arjan van de Ven <arjan@...ux.intel.com>
To:     Thomas Gleixner <tglx@...utronix.de>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Andi Kleen <ak@...ux.intel.com>,
        "Li, Aubrey" <aubrey.li@...ux.intel.com>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Christoph Lameter <cl@...ux.com>,
        Aubrey Li <aubrey.li@...el.com>, len.brown@...el.com,
        rjw@...ysocki.net, tim.c.chen@...ux.intel.com,
        paulmck@...ux.vnet.ibm.com, yang.zhang.wz@...il.com,
        x86@...nel.org, linux-kernel@...r.kernel.org,
        daniel.lezcano@...aro.org
Subject: Re: [RFC PATCH v1 00/11] Create fast idle path for short idle periods

On 7/17/2017 12:46 PM, Thomas Gleixner wrote:
> On Mon, 17 Jul 2017, Arjan van de Ven wrote:
>> On 7/17/2017 12:23 PM, Peter Zijlstra wrote:
>>> Now I think the problem is that the current predictor goes for an
>>> average idle duration. This means that we, on average, get it wrong 50%
>>> of the time. For performance that's bad.
>>
>> that's not really what it does; it looks at next tick
>> and then discounts that based on history;
>> (with different discounts for different orders of magnitude)
>
> next tick is the worst thing to look at for interrupt heavy workloads as

Well, it was better than what was there before (without the discounting and without
detecting repeated patterns).
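
(Just to illustrate the shape of it -- this is not the actual menu governor code, and the
bucket layout and the 1024 fixed-point scale below are made up for the sketch:)

#include <stdint.h>

#define NR_BUCKETS	6
#define SCALE		1024			/* fixed-point 1.0 */

static uint32_t correction[NR_BUCKETS] = {
	SCALE, SCALE, SCALE, SCALE, SCALE, SCALE,	/* start at 1.0 */
};

/* Bucket by order of magnitude of the expected sleep length (us). */
static int which_bucket(uint64_t next_tick_us)
{
	int b = 0;

	while (next_tick_us >= 10 && b < NR_BUCKETS - 1) {
		next_tick_us /= 10;
		b++;
	}
	return b;
}

/* Discount the next-tick estimate by the factor learned for its bucket. */
static uint64_t predict_idle_us(uint64_t next_tick_us)
{
	return next_tick_us * correction[which_bucket(next_tick_us)] / SCALE;
}

/* On wakeup, nudge the bucket's factor toward what actually happened. */
static void update_prediction(uint64_t next_tick_us, uint64_t measured_us)
{
	int b = which_bucket(next_tick_us);
	uint32_t observed;

	if (!next_tick_us)
		return;
	observed = (uint32_t)(measured_us * SCALE / next_tick_us);
	correction[b] = (3 * correction[b] + observed) / 4;	/* moving average */
}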

> the next tick (as computed by the nohz code) can be far away, while the I/O
> interrupts come in at a high frequency.
>
> That's where Daniel Lezcanos work of predicting interrupts comes in and
> that's the right solution to the problem. The core infrastructure has been
> merged, just the idle/cpufreq users are not there yet. All you need to do
> is to select CONFIG_IRQ_TIMINGS and use the statistics generated there.
>

yes ;-)
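
(A rough sketch of what such a user could look like -- the irq_timings_next_event()
helper is assumed to behave as in kernel/irq/timings.c, returning the predicted absolute
time in ns of the next interrupt, or 0 if there is no usable prediction; the rest is
illustrative, not a merged governor:)

#include <stdint.h>

/*
 * Assumed interface from the CONFIG_IRQ_TIMINGS infrastructure
 * (kernel/irq/timings.c): predicted absolute time, in ns, of the next
 * interrupt on this CPU, or 0 when there is no usable prediction.
 */
extern uint64_t irq_timings_next_event(uint64_t now_ns);

/*
 * Illustration only: combine the nohz next-tick estimate with the irq
 * timings prediction and keep whichever wakeup is expected first, so an
 * interrupt-heavy workload is not mistaken for a long sleep.
 */
static uint64_t predicted_sleep_ns(uint64_t now_ns, uint64_t next_tick_ns)
{
	uint64_t wake_ns = next_tick_ns;
	uint64_t next_irq_ns = irq_timings_next_event(now_ns);

	if (next_irq_ns && next_irq_ns < wake_ns)
		wake_ns = next_irq_ns;

	return wake_ns > now_ns ? wake_ns - now_ns : 0;
}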

Also note that the predictor does not need to be perfect; on most systems the C states are
an order of magnitude apart in terms of power/performance/latency, so if you get the general
order of magnitude right the predictor is doing its job.

(This is not universally true, but the physics of power gating etc. tend to drive toward this
conclusion: the cost of implementing an extra state very close to another one means the HW folks
are unlikely to implement the less power-saving state of the two, saving themselves the cost and
testing effort.)
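
(To make the order-of-magnitude point concrete -- the residency numbers below are invented,
real ones come from the cpuidle driver tables; with neighbouring states roughly a decade apart,
a prediction that is only right to within its order of magnitude still picks the same state:)

#include <stdint.h>
#include <stddef.h>

/* Invented residency thresholds (us), roughly an order of magnitude apart. */
struct cstate {
	const char *name;
	uint64_t target_residency_us;
};

static const struct cstate cstates[] = {
	{ "C1",   2 },
	{ "C3",  50 },
	{ "C6", 500 },
};

/* Deepest state whose target residency fits the predicted idle time. */
static const struct cstate *pick_cstate(uint64_t predicted_idle_us)
{
	size_t i, best = 0;

	for (i = 0; i < sizeof(cstates) / sizeof(cstates[0]); i++) {
		if (cstates[i].target_residency_us <= predicted_idle_us)
			best = i;
	}
	return &cstates[best];
}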
