Date:   Thu, 19 Jan 2017 22:43:23 +0100
From:   "Rafael J. Wysocki" <rafael@...nel.org>
To:     Daniel Lezcano <daniel.lezcano@...aro.org>
Cc:     Alex Shi <alex.shi@...aro.org>,
        Greg Kroah-Hartman <gregkh@...uxfoundation.org>,
        "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Linux PM <linux-pm@...r.kernel.org>,
        Linux Kernel Mailing List <linux-kernel@...r.kernel.org>,
        Ulf Hansson <ulf.hansson@...aro.org>,
        Rasmus Villemoes <linux@...musvillemoes.dk>,
        Arjan van de Ven <arjan@...ux.intel.com>,
        Rik van Riel <riel@...hat.com>
Subject: Re: [PATCH 3/3] cpuidle/menu: add per cpu pm_qos_resume_latency consideration

On Thu, Jan 19, 2017 at 11:21 AM, Daniel Lezcano
<daniel.lezcano@...aro.org> wrote:
> On Thu, Jan 19, 2017 at 05:25:37PM +0800, Alex Shi wrote:
>>
>> > That said, I have the feeling that this is taking the wrong direction. Each
>> > time we enter idle, we check the latencies, and entering idle can happen
>> > thousands of times per second. Wouldn't it make sense to disable the states
>> > that do not fulfill the constraints at the moment the latencies are changed?
>> > As the idle states have increasing exit latencies, setting an idle state
>> > limit to disable all states after that limit may be more efficient than
>> > checking again and again in the idle path, no?
>>
>> You're right. Saving some checking is a good thing to do.
>
> Hi Alex,
>
> I think you missed the point.
>
> What I am proposing is to change the current approach by disabling all the
> states after a specific latency.
>
> We add a specific internal function:
>
> static int cpuidle_set_latency(struct cpuidle_driver *drv,
>                                 struct cpuidle_device *dev,
>                                 int latency)
> {
>         int i;
>
>         /* Find how many states fit within the latency constraint. */
>         for (i = 0; i < drv->state_count; i++) {
>
>                 struct cpuidle_state *s = &drv->states[i];
>
>                 if (s->exit_latency > latency)
>                         break;
>         }
>
>         /* Only the first 'i' states remain usable on this device. */
>         dev->state_count = i;
>
>         return 0;
> }
>
> This function is called from the notifier callback:
>
> static int cpuidle_latency_notify(struct notifier_block *b,
>                 unsigned long l, void *v)
>  {
> -       wake_up_all_idle_cpus();
> +       struct cpuidle_device *dev;
> +       struct cpuidle_driver *drv;
> +       int cpu;
> +
> +       cpuidle_pause_and_lock();
> +       for_each_possible_cpu(cpu) {
> +               dev = &per_cpu(cpuidle_dev, cpu);
> +               drv = cpuidle_get_cpu_driver(dev);
> +               cpuidle_set_latency(drv, dev, l);
> +       }
> +       cpuidle_resume_and_unlock();
> +
>         return NOTIFY_OK;
>  }

The above may be problematic if the constraints change relatively
often.  It is global, so it will affect all of the CPUs in the system
every time; now think about systems with hundreds of them.
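
For illustration, the alternative taken by the $SUBJECT patch is to keep
the check in the idle path but make it per CPU, so a changed constraint
only costs the CPU it applies to.  A minimal sketch of such a check at
state selection time, assuming dev_pm_qos_read_value() and
get_cpu_device() as the accessors (the helper name below is made up for
the example, not taken from the posted patch):

#include <linux/cpu.h>
#include <linux/cpuidle.h>
#include <linux/pm_qos.h>

static int menu_effective_latency_req(struct cpuidle_device *dev)
{
        /* Global CPU_DMA_LATENCY constraint, shared by all CPUs. */
        int latency_req = pm_qos_request(PM_QOS_CPU_DMA_LATENCY);
        /* Per-CPU resume latency constraint on this CPU's device. */
        struct device *device = get_cpu_device(dev->cpu);
        int resume_latency = dev_pm_qos_read_value(device);

        /* A non-zero per-CPU value tightens the global constraint. */
        if (resume_latency && resume_latency < latency_req)
                latency_req = resume_latency;

        return latency_req;
}

The governor would then skip any state whose exit_latency exceeds the
value returned here, so a constraint update never has to walk or wake
the other CPUs.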

Thanks,
Rafael
