Date:	Tue, 03 Aug 2010 23:52:35 -0400
From:	Andrea Bastoni <bastoni@...unc.edu>
To:	Peter Zijlstra <peterz@...radead.org>
CC:	Bjoern Brandenburg <bbb@...il.unc.edu>,
	Raistlin <raistlin@...ux.it>,
	linux-kernel <linux-kernel@...r.kernel.org>,
	Song Yuan <song.yuan@...csson.com>,
	Dmitry Adamushko <dmitry.adamushko@...il.com>,
	Thomas Gleixner <tglx@...utronix.de>,
	Nicola Manica <nicola.manica@...i.unitn.it>,
	Luca Abeni <lucabe72@...il.it>,
	Claudio Scordino <claudio@...dence.eu.com>,
	Harald Gustafsson <harald.gustafsson@...csson.com>,
	Giuseppe Lipari <lipari@...is.sssup.it>
Subject: Re: periods and deadlines in SCHED_DEADLINE

On 08/03/2010 05:41 AM, Peter Zijlstra wrote:
> On Sun, 2010-07-11 at 08:42 +0200, Bjoern Brandenburg wrote:
>>
>> If you want to do G-EDF with limited and different budgets on each CPU
>> (e.g., G-EDF tasks may only run for 100 out of 1000 ms on CPU 0, but
>> for 400 out of 1000 ms on CPU 1), then you are entering the domain of
>> restricted-supply scheduling, which is significantly more complicated
>> to analyze (see [1,2]). 
> 
> Without having looked at the refs, won't the soft case still have
> bounded tardiness? Since the boundedness property mostly depends on
> u<=1, that is, as long as we can always run everything within the
> available time we won't start drifting.

Yes, the soft case will still have bounded tardiness (see [2]), although the reason is more
closely related to the fact that priorities are defined by deadlines than to U <= 1.

Anyway, both hard and soft real-time cases become very difficult to analyze if limited/different
budgets are allowed on each CPU.

>> As far as I know there is no existing analysis for "almost G-EDF",
>> i.e.,  the case where each task may only migrate among a subset of the
>> processors (= affinity masks), except for the special case of
>> clustered EDF (C-EDF), wherein the subsets of processors are
>> non-overlapping. 
> 
> Right, affinity masks are a pain, hence I proposed to limit that to
> either 1 cpu (yielding fully partitioned) or the full cluster.

I'm not sure I get what you mean by "full cluster". With G-EDF-like scheduling policies it only
makes sense to cluster cores around some memory level (cache Lx, NUMA node, ...), as the idea is
to reduce the cost of task migrations among cores. Depending on the workload, a higher (or lower)
level of clustering may perform better.

A "full cluster" therefore should be created around some memory level. But if a socket has, for
example, two level of caches (L2 + L3) and a "full cluster" forces to select all cores in the
socket (first hierarchy level in cpusets), we miss the possibility to cluster the cores that
shares the L2 (and this configuration may lead better performance). In these clusters the
processors are _non-overlapping_.
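
As a concrete (purely illustrative) example of clustering around a cache level, the sketch below
confines the calling task to a hypothetical L2 cluster made of CPUs 0-1; the CPU numbering is an
assumption, and a real tool would derive the cluster from the cache topology exported in sysfs:

/*
 * Illustrative sketch only: restricting a task to a hypothetical L2-sharing
 * cluster (here CPUs 0-1) with the standard affinity API. The CPU numbering
 * and cluster layout are assumptions; a real setup would read the topology
 * from /sys/devices/system/cpu/cpuN/cache/.
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
	cpu_set_t l2_cluster;

	CPU_ZERO(&l2_cluster);
	CPU_SET(0, &l2_cluster);	/* assumed: CPUs 0 and 1 share an L2 */
	CPU_SET(1, &l2_cluster);

	/* Pin the calling task to the L2 cluster (pid 0 == current task). */
	if (sched_setaffinity(0, sizeof(l2_cluster), &l2_cluster) != 0) {
		perror("sched_setaffinity");
		return 1;
	}
	printf("task confined to the L2 cluster (CPUs 0-1)\n");
	return 0;
}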

Instead, if you want to use cpusets + affinities to define possibly _overlapping_ clusters (or
containers, or servers) to support different budgets on each CPU (something similar to cgroups,
see [1,3]), forcing only two configurations (single CPU / full cluster) may be restrictive.
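
For illustration only, the sketch below builds two _overlapping_ affinity masks (a hypothetical
task A on CPUs 0-1 and task B on CPUs 1-2, with made-up pids); this is exactly the kind of
configuration that a "single CPU or full cluster" restriction would rule out:

/*
 * Illustrative sketch only: two hypothetical tasks with _overlapping_
 * affinity masks (task A on CPUs 0-1, task B on CPUs 1-2).
 */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

static int pin(pid_t pid, const int *cpus, int n)
{
	cpu_set_t mask;
	int i;

	CPU_ZERO(&mask);
	for (i = 0; i < n; i++)
		CPU_SET(cpus[i], &mask);
	return sched_setaffinity(pid, sizeof(mask), &mask);
}

int main(void)
{
	/* Hypothetical pids; in practice these would be real task ids. */
	const int cluster_a[] = { 0, 1 };
	const int cluster_b[] = { 1, 2 };

	if (pin(1234, cluster_a, 2) || pin(5678, cluster_b, 2))
		perror("sched_setaffinity");
	return 0;
}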

Thanks,

- Andrea

[3] H. Leontyev and J. Anderson, "A Hierarchical Multiprocessor Bandwidth Reservation Scheme
with Timing Guarantees", Real-Time Systems, Volume 43, Number 1, pp. 60-92, September 2009.
http://www.cs.unc.edu/~anderson/papers/rtj09b.pdf

-- 
Andrea Bastoni
Visiting Ph.D. Student
Dept. of Computer Science
University of North Carolina at Chapel Hill
http://www.sprg.uniroma2.it/home/bastoni/
