Message-ID: <870e5cb6-bb3b-7d51-93b3-db4928f700b4@evidence.eu.com>
Date:   Wed, 28 Feb 2018 12:15:56 +0100
From:   Claudio Scordino <claudio@...dence.eu.com>
To:     "Rafael J . Wysocki" <rafael.j.wysocki@...el.com>,
        Viresh Kumar <viresh.kumar@...aro.org>
Cc:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...hat.com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        Dietmar Eggemann <dietmar.eggemann@....com>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Juri Lelli <juri.lelli@...hat.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Todd Kjos <tkjos@...roid.com>,
        Joel Fernandes <joelaf@...gle.com>, linux-pm@...r.kernel.org,
        linux-kernel@...r.kernel.org
Subject: Re: [PATCH v2] cpufreq: schedutil: rate limits for SCHED_DEADLINE

Dear Rafael, dear Viresh,

On 28/02/2018 12:06, Claudio Scordino wrote:
> When the SCHED_DEADLINE scheduling class increases the CPU utilization,
> we should not wait for the rate limit, otherwise we may miss some
> deadlines.
> 
> Tests using rt-app on Exynos5422 with up to 10 SCHED_DEADLINE tasks have
> shown reductions of up to 10% in deadline misses, with a negligible
> increase in energy consumption (measured through a Baylibre Cape).
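
To summarize the idea: schedutil normally rate-limits how often it issues
frequency updates; the patch lets an increase in deadline utilization bypass
that limit, so a newly admitted DL task gets its frequency raise immediately.
A rough user-space model of the check (illustrative names only, not the
actual kernel code) is:

	/*
	 * Toy model: updates arriving within rate_limit_ns of the previous
	 * one are normally skipped, but an observed increase in deadline
	 * utilization forces the next update through regardless.
	 */
	#include <stdbool.h>
	#include <stdint.h>
	#include <stdio.h>

	struct toy_policy {
		uint64_t last_update_ns;
		uint64_t rate_limit_ns;
		uint64_t prev_dl_util;     /* DL utilization at last update */
		bool     need_freq_update;
	};

	/* Flag an immediate update when DL utilization has grown. */
	static void toy_check_dl(struct toy_policy *p, uint64_t dl_util)
	{
		if (dl_util > p->prev_dl_util)
			p->need_freq_update = true;
	}

	static bool toy_should_update(struct toy_policy *p, uint64_t now_ns)
	{
		if (p->need_freq_update)
			return true;    /* DL went up: ignore the rate limit */
		return now_ns - p->last_update_ns >= p->rate_limit_ns;
	}

	int main(void)
	{
		struct toy_policy p = { .last_update_ns = 1000,
					.rate_limit_ns  = 500,
					.prev_dl_util   = 100 };

		/* 200 ns after the last update: normally rate-limited... */
		toy_check_dl(&p, 100);
		printf("no DL change: update? %d\n", toy_should_update(&p, 1200));

		/* ...but a DL utilization increase pushes the update through. */
		toy_check_dl(&p, 150);
		printf("DL increased: update? %d\n", toy_should_update(&p, 1200));
		return 0;
	}

(In the real governor the flag is of course cleared once the frequency has
actually been updated; the toy leaves that out for brevity.)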

As a follow-up to the previous thread, I've put some figures here: https://gist.github.com/claudioscordino/d4a10e8b3ceac419fb0c8b552db19806

In some cases, I've noticed that the patch even reduces energy consumption (due to a mix of factors, including DL tasks entering the inactive state sooner).

I've also tried to create the "ramp-up" scenario by allocating 10 DL tasks on the same core, but it didn't produce any significant increase in consumption.
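
For anyone wanting to reproduce something similar: the workload was generated
with rt-app, and a minimal config along these lines (the parameters are
illustrative, not the ones I used) spawns 10 deadline tasks:

	{
		"global": {
			"duration": 10,
			"default_policy": "SCHED_OTHER",
			"calibration": "CPU0"
		},
		"tasks": {
			"dl_task": {
				"instance": 10,
				"policy": "SCHED_DEADLINE",
				"dl-runtime": 2000,
				"dl-deadline": 20000,
				"dl-period": 20000,
				"run": 1500,
				"timer": { "ref": "unique", "period": 20000 }
			}
		}
	}

Note that actually confining the DL tasks to a single core also requires
setting up an exclusive cpuset, which rt-app does not do by itself.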

IMHO, the overall behavior looks better.

Best regards,

              Claudio
