Date:	Mon, 19 Mar 2007 09:06:36 -0600
From:	"Chris Friesen" <cfriesen@...tel.com>
To:	Mark Hahn <hahn@...aster.ca>
CC:	Con Kolivas <kernel@...ivas.org>, linux-kernel@...r.kernel.org
Subject: Re: RSDL v0.31


Just so you know the context, I'm coming at this from the point of view 
of an embedded call server designer.

Mark Hahn wrote:
> why do you think fairness is good, especially always good?

Fairness is good because it promotes predictability.  See the 
"deterministic" section below.

> even starvation is sometimes a good thing - there's a place for processes
> that only use the CPU if it is otherwise idle.  that is, they are
> deliberately starved all the rest of the time.

If nice 19 is made sufficiently low priority, then the difference 
between "only uses the CPU when it is otherwise idle" and "gets a tiny 
bit of CPU even when it is not totally idle" is unimportant.
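
To make that concrete, a minimal userspace sketch (mine, not from this 
thread) of demoting a background task with the standard setpriority() 
call:

/* Sketch: demote the current process to nice 19 so it mostly runs
 * when the CPU would otherwise be idle, while still getting a small
 * share under load. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
	/* PRIO_PROCESS with pid 0 means "the calling process". */
	if (setpriority(PRIO_PROCESS, 0, 19) == -1) {
		perror("setpriority");
		return 1;
	}

	/* ... batch/background work goes here ... */
	return 0;
}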

Starvation is a very *bad* thing when you don't want it.


>> Much lower and bound latencies

> in an average sense?  also, under what circumstances does this actually
> matter?  (please don't offer something like RT audio on an overloaded
> machine - that's operator error, not something to design for.)

In my environment, latency *matters*.  If a packet doesn't get processed 
in time, you drop it.  With mainline it can be quite tricky to tune the 
latency, especially when you don't want to resort to soft realtime 
because you don't entirely trust the code that's running (it came from a 
third-party vendor).
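
For reference, this is roughly what "resorting to soft realtime" looks 
like from userspace; the sketch and the priority value are my own 
illustration, not something anyone in this thread proposed:

/* Give a task a soft-realtime (SCHED_FIFO) priority.  Needs root or
 * CAP_SYS_NICE.  The risk: a buggy or untrusted task at this priority
 * that spins will starve every lower-priority task, which is exactly
 * why one hesitates to do this for third-party code. */
#include <sched.h>
#include <stdio.h>
#include <sys/types.h>

int make_soft_realtime(pid_t pid, int prio)
{
	struct sched_param sp = { .sched_priority = prio };

	/* pid 0 means the calling process; a SCHED_FIFO task runs until
	 * it blocks or a higher-priority task becomes runnable. */
	if (sched_setscheduler(pid, SCHED_FIFO, &sp) == -1) {
		perror("sched_setscheduler");
		return -1;
	}
	return 0;
}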


>> Deterministic

> not a bad thing, but how does this make itself apparent and of value to 
> the user?  I think everyone is extremely comfortable with non-determinism
> (stemming from networks, caches, interleaved workloads, etc)

Determinism is really important.  It almost doesn't matter what the 
behaviour is, as long as we can predict it.  We model the system to 
determine how to tweak it (niceness, sched policy, etc.), as well as 
what performance numbers we can advertise.  If the system is 
non-deterministic, that modelling becomes extremely difficult--you end 
up giving away significant performance to cover worst-case spikes.

If the system is deterministic, it is far easier to model, and the 
model's predictions actually hold.
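
As a rough illustration of why worst-case spikes translate into lost 
performance (all numbers below are invented for the example, nothing 
here comes from measurements):

/* If every packet must be handled within a 10 ms budget and the
 * scheduler can delay you by up to 5 ms in the worst case, half the
 * budget is gone before any work happens, so advertised capacity is
 * sized against the spike, not against the typical 100 us delay. */
#include <stdio.h>

int main(void)
{
	const double budget_us    = 10000.0; /* per-packet deadline        */
	const double work_us      = 50.0;    /* CPU time needed per packet */
	const double typical_us   = 100.0;   /* typical scheduling delay   */
	const double worstcase_us = 5000.0;  /* worst-case scheduling spike*/

	/* Packets you could promise per deadline window if only the typical
	 * delay mattered, versus what you must promise to survive the spike. */
	printf("capacity sized to typical delay:    %.0f packets\n",
	       (budget_us - typical_us) / work_us);
	printf("capacity sized to worst-case spike: %.0f packets\n",
	       (budget_us - worstcase_us) / work_us);
	return 0;
}

With a deterministic scheduler the worst-case and typical figures 
converge, and the gap you have to give away shrinks.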

Chris
-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in
the body of a message to majordomo@...r.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Please read the FAQ at  http://www.tux.org/lkml/
