Date:   Mon, 2 Sep 2019 12:53:48 +0200
From:   Dietmar Eggemann <dietmar.eggemann@....com>
To:     Rik van Riel <riel@...riel.com>, linux-kernel@...r.kernel.org
Cc:     kernel-team@...com, pjt@...gle.com, peterz@...radead.org,
        mingo@...hat.com, morten.rasmussen@....com, tglx@...utronix.de,
        mgorman@...hsingularity.net, vincent.guittot@...aro.org
Subject: Re: [PATCH RFC v4 0/15] sched,fair: flatten CPU controller runqueues

On 22/08/2019 04:17, Rik van Riel wrote:
> The current implementation of the CPU controller uses hierarchical
> runqueues, where on wakeup a task is enqueued on its group's runqueue,
> the group is enqueued on the runqueue of the group above it, etc.
> 
> This adds a fairly large amount of overhead for workloads that
> do a lot of wakeups a second, especially given that the default systemd
> hierarchy is 2 or 3 levels deep.
> 
> This patch series is an attempt at reducing that overhead, by placing
> all the tasks on the same runqueue, and scaling the task priority by
> the priority of the group, which is calculated periodically.
> 
> My main TODO items for the next period of time are likely going to
> be testing, testing, and testing. I hope to find and flush out any
> remaining corner cases, make sure performance does not regress with
> any workloads, and hopefully improves for some.
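
Before the results, let me spell out my understanding of the flattening:
a task's weight on the now-single runqueue is scaled by the product of
its ancestor groups' relative shares, recomputed periodically rather
than on every enqueue/dequeue. Roughly (a hypothetical sketch with
simplified stand-in types, not the actual code from the patch set):

struct task_group {
	struct task_group *parent;	/* NULL for the root group */
	unsigned long shares;		/* e.g. cpu.shares */
	unsigned long child_shares_sum;	/* sum of the runnable children's
					   shares, assumed non-zero */
};

/*
 * Effective weight of a task on the single (root) runqueue: the task
 * weight scaled by each ancestor group's fraction of its parent's
 * shares, recomputed periodically instead of on every wakeup.
 */
static unsigned long flattened_task_weight(unsigned long task_weight,
					   struct task_group *tg)
{
	unsigned long w = task_weight;

	for (; tg && tg->parent; tg = tg->parent)
		w = w * tg->shares / tg->parent->child_shares_sum;

	return w;
}

Is that roughly the idea?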

I did some testing with a small & simple rt-app-based test case:

2 CPUs (rq->cpu_capacity_orig=1024), CPUfreq performance governor

2 taskgroups /tg0 and /tg1

6 CFS tasks (periodic, 8/16ms (runtime/period); see the sketch below)

/tg0 (cpu.shares=1024) ran 4 tasks and /tg1 (cpu.shares=1024) ran 2 tasks

(arm64 defconfig with !CONFIG_NUMA_BALANCING, !CONFIG_SCHED_AUTOGROUP)
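
Each task follows the usual rt-app periodic pattern: run 8ms, then
sleep until the next 16ms boundary. In plain C, one task would look
roughly like this (a minimal sketch; the cgroup attach and rt-app's
json plumbing are omitted, and the names are mine):

#include <time.h>

/* Burn ~runtime_ns of CPU time (not wall time), approximating
 * rt-app's calibrated work loop. */
static void burn(long runtime_ns)
{
	struct timespec start, now;

	clock_gettime(CLOCK_THREAD_CPUTIME_ID, &start);
	do {
		clock_gettime(CLOCK_THREAD_CPUTIME_ID, &now);
	} while ((now.tv_sec - start.tv_sec) * 1000000000L +
		 (now.tv_nsec - start.tv_nsec) < runtime_ns);
}

int main(void)
{
	struct timespec next;
	int i;

	clock_gettime(CLOCK_MONOTONIC, &next);
	for (i = 0; i < 1000; i++) {		/* ~16s total */
		burn(8 * 1000000L);		/* 8ms runtime */
		next.tv_nsec += 16 * 1000000L;	/* 16ms period */
		if (next.tv_nsec >= 1000000000L) {
			next.tv_nsec -= 1000000000L;
			next.tv_sec++;
		}
		/* if we overran the period, this returns immediately */
		clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL);
	}
	return 0;
}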

---

v5.2:

The 2 /tg1 tasks ran 8/16ms. The 4 /tg0 tasks ran 4/16ms at the
beginning and then 8/16ms after the 2 /tg1 tasks had finished.
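
That matches what the shares predict:

  per-group bandwidth: 1024 / (1024 + 1024) * 2 CPUs = 1 CPU
  /tg1: 1 CPU / 2 tasks = 8ms per 16ms period (their full demand)
  /tg0: 1 CPU / 4 tasks = 4ms per 16ms period, until /tg1 exits
        (then 2 CPUs / 4 tasks = 8ms per 16ms period)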

---

v5.2 + v4:

There is no runtime/period pattern visible anymore. I see a lot of extra
wakeup latency for those tasks though.

v5.2 + (v4 without 07/15, 08/15, 15/15) didn't change much.

---

I could try to reduce the patch stack even further (e.g. without 13/15).

IMHO it's a good idea to have a set of these small & simple test cases
handy to verify that the base functionality is still in place. This
might be hard to achieve with benchmarks.
