Date:   Tue, 22 Nov 2016 09:55:40 -0800
From:   Andres Oportus <andresoportus@...gle.com>
To:     linux-kernel@...r.kernel.org
Subject: Re: [PATCH 0/6 v6] sched: reflect sched_entity move into task_group's load

On Tue, Nov 22, 2016 at 9:52 AM, Andres Oportus
<andresoportus@...gle.com> wrote:
>
> I have back-ported these patches into Android with 4.4 kernel and tested
> them on a Hikey device [1].  Back-ported patches used are enclosed.
> I see equal or better performance across all tests.
>
> PELT below refers to https://android.googlesource.com/kernel/hikey-linaro at
> android-hikey-linaro-4.4 caae63389787850e633eef67526443729c18451c
> and WALT disabled (Android with PELT).
> PELT+fixes refers to PELT + Vincent's patches back-ported.
>
> These are tests run using the Vellamo app [2], higher scores are better:
>
> Test                   PELT      PELT+fixes   Delta
> MT Linpack native      126       131           3.97%
> MT Linpack java        173       175           1.16%
> MT Stream 5.10         158       158           0.00%
> Membench               212       214           0.94%
> Sysbench               174       196          12.64%
> Parsec                 202       207           2.48%
>
> These are "Jankbench" tests, real-world use cases of special interest to
> Android.  A score of Y at the Xth frame percentile means that X% of frames
> were rendered in Y ms or less.  Lower is better.
>
> Test                            PELT    PELT+fixes  Delta
> listview scroll 90%             13.87   13.66       -1.54%
> listview scroll 95%             14.80   14.58       -1.51%
> listview scroll 99%             17.43   17.00       -2.53%
> image listview scroll 90%       14.17   14.04       -0.93%
> image listview scroll 95%       15.13   14.90       -1.54%
> image listview scroll 99%       17.43   17.08       -2.05%
> shadow grid 90%                 16.40   16.28       -0.74%
> shadow grid 95%                 17.23   17.06       -1.00%
> shadow grid 99%                 23.07   23.44        1.58%
> hi-hitrate text render 90%      28.77   28.62       -0.52%
> hi-hitrate text render 95%      31.60   31.48       -0.38%
> hi-hitrate text render 99%      41.53   41.10       -1.05%
> low-hitrate text render 90%     28.97   28.66       -1.08%
> low-hitrate text render 95%     32.10   31.66       -1.39%
> low-hitrate text render 99%     41.70   41.38       -0.77%
>
> Andres
>
> [1] http://www.96boards.org/product/hikey
> [2] https://play.google.com/store/apps/details?id=com.quicinc.vellamo
>
>
>> From: Vincent Guittot <vincent.guittot@...aro.org>
>> Date: Tue, Nov 8, 2016 at 12:26 AM
>> Subject: [PATCH 0/6 v6] sched: reflect sched_entity move into task_group's load
>> To: peterz@...radead.org, mingo@...nel.org,
>> linux-kernel@...r.kernel.org, dietmar.eggemann@....com
>> Cc: yuyang.du@...el.com, Morten.Rasmussen@....com, pjt@...gle.com,
>> bsegall@...gle.com, kernellwp@...il.com, Vincent Guittot
>> <vincent.guittot@...aro.org>
>>
>>
>> Ensure that the move of a sched_entity will be reflected in load and
>> utilization of the task_group hierarchy.
>>
>> When a sched_entity moves between groups or CPUs, the load and utilization
>> of the cfs_rq don't reflect the change immediately but converge to the new
>> values over time.  As a result, the metrics are no longer aligned with the
>> actual balance of load in the system, and subsequent decisions are based on
>> a biased view.
>>
>> This patchset synchronizes the load/utilization of a sched_entity with its
>> child cfs_rq (se->my_q) only when tasks move to/from the child cfs_rq:
>> - move between task groups
>> - migration between CPUs
>> Otherwise, PELT is updated as usual.
>>
>> This version doesn't include any changes related to the discussions that
>> started during the review of the previous version about:
>> - encapsulating the sequence for changing the property of a task
>> - removing a cfs_rq from the list during update_blocked_averages
>> These topics don't gain anything from being bundled into this patchset; they
>> are fairly independent and deserve separate patches.
>>
>> Changes since v5:
>> - factorize detach entity like for attach
>> - fix add_positive
>> - Fixed a few coding style issues
>>
>> Changes since v4:
>> - minor typo and commit message changes
>> - move call to cfs_rq_clock_task(cfs_rq) in post_init_entity_util_avg
>>
>> Changes since v3:
>> - Replaced the two arguments of update_load_avg with a single flags argument
>> - Propagated the move in runnable_load_avg when the sched_entity is already on_rq
>> - Ensured that intermediate values will not reach memory when updating load and
>>   utilization
>> - Optimized the calculation of the load_avg of the sched_entity
>> - Fixed some typos
>>
>> Changes since v2:
>> - Propagate both utilization and load
>> - Synced sched_entity and se->my_q instead of adding the delta
>>
>> Changes since v1:
>> - This patch needs the patch that fixes an issue with rq->leaf_cfs_rq_list,
>>   "sched: fix hierarchical order in rq->leaf_cfs_rq_list", in order to work
>>   correctly. I haven't sent them as a single patchset because that fix is
>>   independent of this one
>> - Merged some functions that are always used together
>> - During the update of blocked load, ensured that the sched_entity is synced
>>   with the cfs_rq applying the changes
>> - Fixed an issue when a task changes its CPU affinity
>>
>> Vincent Guittot (6):
>>   sched: factorize attach/detach entity
>>   sched: fix hierarchical order in rq->leaf_cfs_rq_list
>>   sched: factorize PELT update
>>   sched: propagate load during synchronous attach/detach
>>   sched: propagate asynchronous detach
>>   sched: fix task group initialization
>>
>>  kernel/sched/core.c  |   1 +
>>  kernel/sched/fair.c  | 393 ++++++++++++++++++++++++++++++++++++++++-----------
>>  kernel/sched/sched.h |   2 +
>>  3 files changed, 316 insertions(+), 80 deletions(-)
>>
>> --
>> 2.7.4
>
>
