Date:	Mon, 23 May 2016 11:58:42 +0100
From:	Morten Rasmussen <morten.rasmussen@....com>
To:	peterz@...radead.org, mingo@...hat.com
Cc:	dietmar.eggemann@....com, yuyang.du@...el.com,
	vincent.guittot@...aro.org, mgalbraith@...e.de,
	linux-kernel@...r.kernel.org,
	Morten Rasmussen <morten.rasmussen@....com>
Subject: [PATCH 00/16] sched: Clean-ups and asymmetric cpu capacity support

Hi,

The scheduler currently does little to help performance on systems with
asymmetric compute capacities (read: ARM big.LITTLE). This series improves the
situation with a few tweaks, mainly to the task wake-up path, so that compute
capacity is considered at wake-up and not just whether a cpu is idle on these
systems. This gives us consistent, and potentially higher, throughput in
partially utilized scenarios. SMP behaviour and performance should be
unaffected.

Test 0:
	for i in `seq 1 10`; \
	       do sysbench --test=cpu --max-time=3 --num-threads=1 run; \
	       done \
	| awk '{if ($4=="events:") {print $5; sum +=$5; runs +=1}} \
	       END {print "Average events: " sum/runs}'

Target: ARM TC2 (2xA15+3xA7)

	(Higher is better)
tip:	Average events: 150.2
patch:	Average events: 217.9

Test 1:
	perf stat --null --repeat 10 -- \
	perf bench sched messaging -g 50 -l 5000

Target: Intel IVB-EP (2 sockets * 10 cores * 2 HW threads)

tip:	4.831538935 seconds time elapsed ( +-  1.58% )
patch:	4.839951382 seconds time elapsed ( +-  1.01% )

Target: ARM TC2 A7-only (3xA7) (-l 1000)

tip:	61.406552538 seconds time elapsed ( +-  0.12% )
patch:	61.589263159 seconds time elapsed ( +-  0.22% )

Active migration of tasks away from small capacity cpus isn't addressed
in this set although it is necessary for consistent throughput in other
scenarios on asymmetric cpu capacity systems.

Patch   1-4: Generic fixes and clean-ups.
Patch  5-13: Improve capacity awareness.
Patch 14-16: Arch features for arm to enable asymmetric capacity support.

Dietmar Eggemann (1):
  sched: Store maximum per-cpu capacity in root domain

Morten Rasmussen (15):
  sched: Fix power to capacity renaming in comment
  sched/fair: Consistent use of prev_cpu in wakeup path
  sched/fair: Disregard idle task wakee_flips in wake_wide
  sched/fair: Optimize find_idlest_cpu() when there is no choice
  sched: Introduce SD_ASYM_CPUCAPACITY sched_domain topology flag
  sched: Disable WAKE_AFFINE for asymmetric configurations
  sched: Make SD_BALANCE_WAKE a topology flag
  sched/fair: Let asymmetric cpu configurations balance at wake-up
  sched/fair: Compute task/cpu utilization at wake-up more correctly
  sched/fair: Consider spare capacity in find_idlest_group()
  sched: Add per-cpu max capacity to sched_group_capacity
  sched/fair: Avoid pulling tasks from non-overloaded higher capacity
    groups
  arm: Set SD_ASYM_CPUCAPACITY for big.LITTLE platforms
  arm: Set SD_BALANCE_WAKE flag for asymmetric capacity systems
  arm: Update arch_scale_cpu_capacity() to reflect change to define

 arch/arm/include/asm/topology.h |   5 +
 arch/arm/kernel/topology.c      |  25 ++++-
 include/linux/sched.h           |   3 +-
 kernel/sched/core.c             |  25 ++++-
 kernel/sched/fair.c             | 217 ++++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h            |   5 +-
 6 files changed, 250 insertions(+), 30 deletions(-)

-- 
1.9.1
