Date:	Wed, 15 Jul 2009 17:19:52 -0400
From:	Ted Baker <baker@...fsu.edu>
To:	"James H. Anderson" <anderson@...unc.edu>
Cc:	Peter Zijlstra <a.p.zijlstra@...llo.nl>,
	Chris Friesen <cfriesen@...tel.com>,
	Raistlin <raistlin@...ux.it>,
	Douglas Niehaus <niehaus@...c.ku.edu>,
	Henrik Austad <henrik@...tad.us>,
	LKML <linux-kernel@...r.kernel.org>, Ingo Molnar <mingo@...e.hu>,
	Bill Huey <billh@...ppy.monkey.org>,
	Linux RT <linux-rt-users@...r.kernel.org>,
	Fabio Checconi <fabio@...dalf.sssup.it>,
	Thomas Gleixner <tglx@...utronix.de>,
	Dhaval Giani <dhaval.giani@...il.com>,
	Noah Watkins <jayhawk@....ucsc.edu>,
	KUSP Google Group <kusp@...glegroups.com>,
	Tommaso Cucinotta <cucinotta@...up.it>,
	Giuseppe Lipari <lipari@...is.sssup.it>,
	Bjoern Brandenburg <bbb@...unc.edu>
Subject: Re: RFC for a new Scheduling policy/class in the Linux-kernel

On Tue, Jul 14, 2009 at 01:16:52PM -0400, James H. Anderson wrote:
> ... BTW, I should say that I am not
> familiar with the PEP protocol that has been discussed in this thread.
> I assume it doesn't work under GEDF, or you wouldn't have asked the
> question...

I have not seen the definition of PEP, but from the context of
this discussion I infer that it refers to an implementation of
priority inheritance.  As such, with pretty much any global
scheduling policy, the set of other tasks whose critical sections
could stack up is bounded only by the number of tasks in the
system.

In case I have misunderstood what PEP is, let me attempt to
summarize what I have inferred:

A high priority running task that would otherwise become blocked
waiting for a lower-priority lock-holding task to release the lock
can give up its priority/shot at execution to the lower-priority
task that is blocking it.  That is, when a task A is "blocked" on
a lock it can stay in the run-queue so long as the task B that is
(ultimately, transitively) blocking it is in (the same?)
run-queue.  At any point where the scheduler would choose to
execute A it instead finds B, by traversing wait-for links between
tasks, and executes B.  The priority order of the run-queue can be
based on any (partial) ordering relation, including deadlines.

A slight complexity is that if B is allowed to suspend itself
while holding a lock, and does so, one must go back and also
remove tasks like A from the run-queue, and when B wakes up,
one must do the reverse.  However, the frequency of deep nesting
of wait-for relationships seems small.

For GEDF on SMP, a question is how to handle the case where A is
blocked on one processor and B may be running on a different one.
This seems to require removing A from the run-queue when it is
detected.

Of course, the current Linux model appears not to fully support
GEDF, since run-queues are per-processor, subject to explicit
migration.  So, as I infer from the preceding messages, the question
above becomes whether to migrate A to B's processor run-queue or to
migrate B to A's processor run-queue.

Ted
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
