Date:	Fri, 29 Jan 2016 16:06:05 +0100
From:	Peter Zijlstra <peterz@...radead.org>
To:	Luca Abeni <luca.abeni@...tn.it>
Cc:	linux-kernel@...r.kernel.org, Ingo Molnar <mingo@...hat.com>,
	Juri Lelli <juri.lelli@....com>
Subject: Re: [RFC 5/8] Track the "total rq utilisation" too

On Fri, Jan 15, 2016 at 10:15:11AM +0100, Luca Abeni wrote:

> There is also a newer paper, that will be published at ACM SAC 2016
> (so, it is not available yet), but is based on this technical report:
> http://arxiv.org/abs/1512.01984
> This second paper describes some more complex algorithms (easily
> implementable over this patchset) that are able to guarantee hard
> schedulability for SCHED_DEADLINE tasks with reclaiming on SMP.

So I finally got around to reading the relevant sections of that paper
(5.1 and 5.2).

The paper introduces two alternatives:

 - parallel reclaim (5.1)
 - sequential reclaim (5.2)

The parent patch introduces the accounting required for sequential
reclaiming IIUC.

Thinking about it, however, I think I would prefer parallel reclaim
over sequential reclaim. The problem I see with sequential reclaim is
that under light load jobs might land on different CPUs and not benefit
from reclaim (as much), since the 'spare' bandwidth is stuck on other
CPUs.

Now I suppose the exact conditions to hit that worst case might be quite
hard to trigger, in which case it might just not matter in practical
terms.

But maybe I'm mistaken; the paper doesn't seem to compare the two
approaches in this way.
