Date: Fri, 12 Feb 2010 20:38:48 +0000
From: "Thor (Hammer of God)" <Thor@...merofgod.com>
To: "craig.wright@...ormation-Defense.com"
	<craig.wright@...ormation-Defense.com>, "Valdis.Kletnieks@...edu"
	<Valdis.Kletnieks@...edu>, 'Christian Sciberras' <uuf6429@...il.com>
Cc: "'McGhee, Eddie'" <Eddie.McGhee@....com>,
	'full-disclosure' <full-disclosure@...ts.grok.org.uk>,
	"security-basics@...urityfocus.com" <security-basics@...urityfocus.com>
Subject: Re: Risk measurements

OK, coconuts and flames aside, a serious question then...  You know, in case I'm really missing something here...

Let's move past the "probability of system compromise" in the sense of any one system and look to what I think you are really getting at, which is "of all the systems we have, some will get compromised, and here is how many" - is THAT what you guys are getting at?  If so, I've got some questions:

It would seem to me that this would only be applicable to massive installations where they've got heterogeneous OSs, huge software bases, clients all over the place and multiple paths in and out to who knows where. If that were the case, and we were going to base this on historical "performance" of OSs and software already deployed, wouldn't this type of company ALREADY HAVE actual, historical information on compromise as it refers to them? Even if you did come up with some model as it applied globally, wouldn't a company's internally gathered statistics be more valid? Further, even if your stats were absolutely valid, wouldn't a company look to their own stats first to see if your stats matched? It would seem that they would only embrace your stats as valid if they exactly matched what they already knew, since theirs are based on actual history and not future probability. Again, even if your stats were based on some new genius method you came up with (not being sarcastic, either), would it not be destined to be compared with something a company already knew? And if so, if your figures matched their figures, their figures must also be valid, and thus would show that they don't need yours?

To me, the only way anyone would look to such statistics would be to see if they could get away with spending less to fix something in the future than what they spent in the past, using your method as a basis for that decision. If your stats show a higher future probability of compromise than what they've had in the past, there is no way they will spend more money up front to potentially fix something than what they've had to spend in the past to get it fixed. So it seems that you would only have 1 shot at being right, but a million shots at being wrong. And if you ever were wrong, they would blame you. It seems like a "one-off potential win" scenario to me.

t



From: Craig S. Wright [mailto:craig.wright@...ormation-Defense.com]
Sent: Friday, February 12, 2010 11:40 AM
To: Valdis.Kletnieks@...edu; 'Christian Sciberras'
Cc: 'McGhee, Eddie'; 'full-disclosure'; security-basics@...urityfocus.com; Thor (Hammer of God)
Subject: RE: [Full-disclosure] Risk measurements


Exactly,

As Valdis has stated, we want economic optimality. He has put this far more clearly than I have.

I will publish a financial model on the blog this weekend that displays the relationships graphically.

Regards,

...

Dr. Craig S Wright<http://gse-compliance.blogspot.com/> GSE-Malware, GSE-Compliance, LLM, & ...

Information Defense<http://www.information-defense.com/> Pty Ltd

_____________________________________________
From: Valdis.Kletnieks@...edu [mailto:Valdis.Kletnieks@...edu]
Sent: Friday, 12 February 2010 11:31 PM
To: Christian Sciberras
Cc: craig.wright@...ormation-defense.com; McGhee, Eddie; full-disclosure; security-basics@...urityfocus.com; Thor (Hammer of God)
Subject: Re: [Full-disclosure] Risk measurements


On Fri, 12 Feb 2010 13:09:55 +0100, Christian Sciberras said:

> There's a time for finding fancy interesting numbers and a time to get
> the system going with the least flaws possible.

You don't want "the least flaws possible". We can get very close to zero flaws per thousand lines of code - but the result ends up costing hundreds of dollars per line. You want "the most economical number of flaws" - if you get it down to 10 flaws, and the next flaw will cost you $750,000 to fix, but you estimate your loss as $500,000 if you don't fix it and get hacked, why are you spending $250,000 extra to fix the flaw?
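
To put numbers on that marginal rule, here is a minimal sketch; the should_fix name and the dollar figures are just the illustrative example above, not part of any real model:

    # Marginal decision: fix the next flaw only while fixing it is
    # cheaper than the expected loss of leaving it in.
    def should_fix(fix_cost: int, expected_loss: int) -> bool:
        return fix_cost < expected_loss

    # The example above: $750K to fix the next flaw, $500K expected
    # loss if it stays in and gets exploited.
    print(should_fix(750_000, 500_000))  # False: fixing overspends by $250K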

> Why should any entity bother with risk modeling if it is not used at all?
> Here's the real question to the subject; What does risk modeling fix?

Risk modeling is what tells you the flaw will cost $500K to not fix. And since you totally screw the pooch if you got it wrong and not fixing it costs $1M, people like to do a good job of risk modelling.
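
For what it's worth, the $500K figure is the sort of output an expected-loss calculation (probability of compromise times impact) produces; a minimal sketch, where the 0.5 probability and $1M impact are assumptions purely for illustration:

    # Expected loss = probability of compromise * impact if compromised.
    # Both inputs below are illustrative assumptions, not real estimates.
    p_compromise = 0.5
    impact = 1_000_000
    print(p_compromise * impact)  # 500000.0 - the "cost of not fixing"

    # If the real probability is 1.0 rather than 0.5, the unfixed flaw
    # costs the full $1M - the "screw the pooch" case above.

which is exactly why an error in the probability estimate flips the fix/don't-fix decision.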



