Message-Id: <20190527062116.11512-1-dietmar.eggemann@arm.com>
Date:   Mon, 27 May 2019 07:21:09 +0100
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Peter Zijlstra <peterz@...radead.org>,
        Ingo Molnar <mingo@...nel.org>
Cc:     Thomas Gleixner <tglx@...utronix.de>,
        Frederic Weisbecker <fweisbec@...il.com>,
        Rik van Riel <riel@...riel.com>,
        Vincent Guittot <vincent.guittot@...aro.org>,
        Morten Rasmussen <morten.rasmussen@....com>,
        Quentin Perret <quentin.perret@....com>,
        Valentin Schneider <valentin.schneider@....com>,
        Patrick Bellasi <patrick.bellasi@....com>,
        linux-kernel@...r.kernel.org
Subject: [PATCH 0/7] sched: Remove per rq load array

Since commit fdf5f315d5cf ("sched/fair: Disable LB_BIAS by default")
(v4.20) the scheduler feature LB_BIAS has been disabled, i.e. the
scheduler has only been using rq->cpu_load[0] for the CPU load values
since then.
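
With LB_BIAS off, the type-indexed lookups into the cpu_load[] history
in source_load()/target_load() are dead code, and both functions
degenerate to weighted_cpuload(). The stand-alone model below
illustrates this; it is a simplification of the pre-patch logic in
kernel/sched/fair.c, and the struct layout and the numbers in main()
are made up for the demo:

#include <stdbool.h>
#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

struct rq {
	unsigned long cpu_load[CPU_LOAD_IDX_MAX]; /* decaying load history */
	unsigned long runnable_load_avg;          /* stand-in for the cfs_rq load avg */
};

static bool sched_feat_lb_bias = false;           /* LB_BIAS default since v4.20 */

static unsigned long weighted_cpuload(const struct rq *rq)
{
	return rq->runnable_load_avg;
}

/* Low guess at the load of a migration-source CPU. */
static unsigned long source_load(const struct rq *rq, int type)
{
	unsigned long total = weighted_cpuload(rq);

	if (type == 0 || !sched_feat_lb_bias)
		return total;

	return total < rq->cpu_load[type - 1] ? total : rq->cpu_load[type - 1];
}

/* High guess at the load of a migration-target CPU. */
static unsigned long target_load(const struct rq *rq, int type)
{
	unsigned long total = weighted_cpuload(rq);

	if (type == 0 || !sched_feat_lb_bias)
		return total;

	return total > rq->cpu_load[type - 1] ? total : rq->cpu_load[type - 1];
}

int main(void)
{
	const struct rq rq = {
		.cpu_load = { 100, 120, 140, 160, 180 },
		.runnable_load_avg = 100,
	};
	int type;

	/* With LB_BIAS off the history is never consulted: every index
	 * yields plain weighted_cpuload(). */
	for (type = 0; type < CPU_LOAD_IDX_MAX; type++)
		printf("type=%d source=%lu target=%lu\n",
		       type, source_load(&rq, type), target_load(&rq, type));

	return 0;
}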

Tests back then (results are listed in the header of the patch
mentioned above) showed no regressions, and nobody has reported any
related problems in the meantime (v4.20 - v5.1).

The following patches remove all the functionality which is no longer
needed:

(1) Per rq load array update code (a stand-alone model of its decay
    update follows this list)
(2) CFS' source_load() and target_load(), used for conservative load
    balancing, which can be directly replaced by weighted_cpuload()
(3) Per rq load array (rq->cpu_load[])
(4) Sched domain per rq load indexes (sd->*_idx), since there is no
    other user left for them
(5) sum_weighted_load of the sched group load balance stats, since it
    is now identical to the actual sched group load

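For reference on (1): the removed update code maintained
CPU_LOAD_IDX_MAX exponentially decaying views of the instantaneous CPU
load, where index i decays with a factor of (2^i - 1)/2^i per tick.
Below is a rough stand-alone model of that decay update; the real
cpu_load_update() in kernel/sched/fair.c also handled missed NOHZ
ticks via decay_load_missed(), which is omitted here:

#include <stdio.h>

#define CPU_LOAD_IDX_MAX 5

/*
 * Rough model of the removed per-tick cpu_load[] update:
 * cpu_load[0] tracks the instantaneous load, and each higher index is
 * a slower-moving average:
 *   cpu_load[i] = ((2^i - 1) * old + new) / 2^i
 * Rounding up on increase keeps the averages from understating a
 * rising load.
 */
static void cpu_load_update(unsigned long cpu_load[CPU_LOAD_IDX_MAX],
			    unsigned long this_load)
{
	int i;

	cpu_load[0] = this_load; /* fast path for index 0 */

	for (i = 1; i < CPU_LOAD_IDX_MAX; i++) {
		unsigned long scale = 1UL << i;
		unsigned long old_load = cpu_load[i];
		unsigned long new_load = this_load;

		if (new_load > old_load)
			new_load += scale - 1; /* round up on increase */

		cpu_load[i] = (old_load * (scale - 1) + new_load) >> i;
	}
}

int main(void)
{
	unsigned long cpu_load[CPU_LOAD_IDX_MAX] = { 0 };
	int tick;

	/* A load spike of 1024 for three ticks, then idle: higher
	 * indexes react and recover more slowly. */
	for (tick = 0; tick < 8; tick++) {
		cpu_load_update(cpu_load, tick < 3 ? 1024 : 0);
		printf("tick=%d load={%lu,%lu,%lu,%lu,%lu}\n", tick,
		       cpu_load[0], cpu_load[1], cpu_load[2],
		       cpu_load[3], cpu_load[4]);
	}
	return 0;
}
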
Dietmar Eggemann (7):
  sched: Remove rq->cpu_load[] update code
  sched/fair: Replace source_load() & target_load() w/
    weighted_cpuload()
  sched/debug: Remove sd->*_idx range on sysctl
  sched: Remove rq->cpu_load[]
  sched: Remove sd->*_idx
  sched/fair: Remove sgs->sum_weighted_load
  sched/fair: Rename weighted_cpuload() to cpu_load()

 include/linux/sched/nohz.h     |   8 -
 include/linux/sched/topology.h |   5 -
 kernel/sched/core.c            |   7 +-
 kernel/sched/debug.c           |  41 +---
 kernel/sched/fair.c            | 385 ++-------------------------------
 kernel/sched/features.h        |   1 -
 kernel/sched/sched.h           |   8 -
 kernel/sched/topology.c        |  10 -
 kernel/time/tick-sched.c       |   2 -
 9 files changed, 33 insertions(+), 434 deletions(-)

-- 
2.17.1
