Date: Sat, 13 Feb 2010 08:11:37 +1100
From: "Craig S Wright" <craig.wright@...ormation-defense.com>
To: "'Thor \(Hammer of God\)'" <Thor@...merofgod.com>,
	<Valdis.Kletnieks@...edu>, "'Christian Sciberras'" <uuf6429@...il.com>
Cc: "'McGhee, Eddie'" <Eddie.McGhee@....com>,
	'full-disclosure' <full-disclosure@...ts.grok.org.uk>,
	security-basics@...urityfocus.com
Subject: Re: Risk measurements

Tim,
Most companies, even the large ones, do not have good models. They have data,
but data is not useful in itself. Most rely on mean value calculations and
little more. They also fail to account for heterogeneity in the data, that is,
unequal variances. To put it as simply as I can, the standard deviations of
the groups are unequal to a degree that simple analysis techniques such as
ANOVA become ineffective (they are not robust). This is why I spent four years
learning non-parametric statistical methods and robust estimation.
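
As a rough illustration of that point (the three samples of per-site incident
costs below are invented, not from any real engagement), SciPy can show both
the problem and the workaround: Levene's test flags the unequal variances, and
the rank-based Kruskal-Wallis test is the kind of non-parametric alternative
to one-way ANOVA that does not assume equal variances or normality.

# Sketch only: why unequal variances undermine classical ANOVA, and a
# rank-based alternative. The "incident cost" samples are fabricated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
site_a = rng.normal(loc=10_000, scale=1_000, size=30)   # low variance
site_b = rng.normal(loc=10_500, scale=8_000, size=30)   # high variance
site_c = rng.normal(loc=12_000, scale=20_000, size=30)  # very high variance

# Levene's test: are the group variances (approximately) equal?
lev_stat, lev_p = stats.levene(site_a, site_b, site_c)
print(f"Levene p = {lev_p:.4f}  (small p => heteroscedastic groups)")

# Classical one-way ANOVA assumes equal variances; with groups like these
# its F statistic and p-value are unreliable.
f_stat, f_p = stats.f_oneway(site_a, site_b, site_c)

# Kruskal-Wallis is the rank-based alternative that does not assume
# normality or equal variances.
h_stat, h_p = stats.kruskal(site_a, site_b, site_c)
print(f"ANOVA p = {f_p:.4f}   Kruskal-Wallis p = {h_p:.4f}")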

The modelling I am working on uses a combination of network graph theory and
heteroscedastic analysis.
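
This is not the actual model, but as a sketch of the graph-theoretic side
only, attack paths over an invented topology can be enumerated and scored with
networkx; the node names and per-hop probabilities below are made-up
placeholders.

# Sketch only: toy attack-path enumeration over an invented network graph.
# Edge weights stand in for per-hop compromise probabilities.
import networkx as nx

g = nx.DiGraph()
# (source, target, probability that the hop succeeds) -- hypothetical numbers
edges = [
    ("internet", "dmz_web", 0.30),
    ("dmz_web", "app_server", 0.10),
    ("internet", "vpn_gw", 0.05),
    ("vpn_gw", "app_server", 0.20),
    ("app_server", "db_server", 0.15),
]
for src, dst, p in edges:
    g.add_edge(src, dst, p=p)

# Probability that at least one simple path to the database is compromised,
# crudely assuming independent hops and independent paths.
paths = list(nx.all_simple_paths(g, "internet", "db_server"))
path_probs = []
for path in paths:
    prob = 1.0
    for u, v in zip(path, path[1:]):
        prob *= g[u][v]["p"]
    path_probs.append(prob)

p_none = 1.0
for pp in path_probs:
    p_none *= (1.0 - pp)
print(f"{len(paths)} paths, P(at least one succeeds) ~ {1.0 - p_none:.4f}")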

Any time a company wants to choose between multiple design alternatives, it
can compare the options. Even in small organisations, there are constant
choices and projects. This can even apply to the stop/go analysis of existing
tools and products. An existing IDS is a sunk cost. The ongoing costs are what
need to be compared, not what has been spent in the past. Metrics help with
this form of decision.
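
A toy version of that stop/go comparison, with every figure invented: the
purchase price of the existing IDS is excluded as sunk, and only
forward-looking costs and expected losses are compared.

# Sketch only: keep an existing IDS vs replace it, ignoring the sunk
# purchase cost of the existing tool. All numbers are hypothetical.
def annual_cost(maintenance, staffing, expected_breach_loss):
    # Forward-looking annual cost of an option; sunk costs are excluded.
    return maintenance + staffing + expected_breach_loss

keep_ids = annual_cost(
    maintenance=40_000,                     # current support contract
    staffing=90_000,                        # analyst time on the old console
    expected_breach_loss=0.08 * 1_000_000,  # 8% annual breach odds x $1M impact
)
replace_ids = annual_cost(
    maintenance=25_000 + 30_000,            # new support + amortised purchase
    staffing=60_000,
    expected_breach_loss=0.05 * 1_000_000,
)
print(f"keep:    ${keep_ids:,.0f}/yr")
print(f"replace: ${replace_ids:,.0f}/yr")
print("decision:", "replace" if replace_ids < keep_ids else "keep")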

The inputs into the models I have used come from many sources. I have used
the CIS frameworks and modelled systems based on each variable in each of the
system documents from CIS (Center for Internet Security). These have been
classified using random forest (RF) classifiers. I have also selected these so
that most of the process can be automated to a high degree. Network paths need
to be calculated manually at present, but there is nothing stopping even this
from being automated.
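
A minimal sketch of that classification step, with everything about the
feature set assumed rather than taken from the real CIS-derived variables:
hypothetical per-host benchmark check results stand in as features, and a
historical compromised/not-compromised flag as the label.

# Sketch only: random-forest classification over hypothetical CIS-style
# benchmark variables. Features, labels and shapes are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_hosts, n_checks = 500, 40          # hosts x benchmark checks (pass=1/fail=0)
X = rng.integers(0, 2, size=(n_hosts, n_checks))
# Toy label: hosts failing many checks are more likely to be "compromised".
y = (X.sum(axis=1) + rng.normal(0, 3, n_hosts) < n_checks * 0.45).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
print("holdout accuracy:", clf.score(X_te, y_te))
# Feature importances suggest which benchmark checks drive the prediction.
top = np.argsort(clf.feature_importances_)[::-1][:5]
print("most informative checks:", top)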

When I started this exercise a decade ago, it was computationally infeasible.
The creation of new statistical methods and the development of computational
mathematics, coupled with the increasing speed of computers, have changed
this.

Not that I have talked with Valdis about it, but some of the feeds into the
model come from what Valdis did with DShield at VT. This is where Bayesian
methods come into play. Existing system data becomes the prior for the model,
which is then coupled with other system calculations.
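
A minimal sketch of that Bayesian step, assuming a conjugate Beta-Binomial
form (which may not be what the real model uses) and invented counts:
historical compromise data supplies the prior, and local observations update
it.

# Sketch only: Beta-Binomial update where existing incident history supplies
# the prior for per-system-year compromise probability. Counts are invented.
from scipy import stats

# Prior from historical/aggregated data (e.g. an external incident feed):
# 12 compromises observed over 400 comparable system-years, on a Beta(1,1) base.
a, b = 1 + 12, 1 + (400 - 12)

# New local observations for this organisation: 2 compromises in 50 system-years.
a_post, b_post = a + 2, b + (50 - 2)

prior = stats.beta(a, b)
posterior = stats.beta(a_post, b_post)
print(f"prior mean P(compromise/yr)     = {prior.mean():.4f}")
print(f"posterior mean P(compromise/yr) = {posterior.mean():.4f}")
lo, hi = posterior.interval(0.9)
print(f"90% credible interval           = ({lo:.4f}, {hi:.4f})")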

Regards,
Craig Wright
----------------------------------------------------------------------------

From: Thor (Hammer of God) [mailto:Thor@...merofgod.com] 
Sent: Saturday, 13 February 2010 7:39 AM
To: craig.wright@...ormation-Defense.com; Valdis.Kletnieks@...edu;
'Christian Sciberras'
Cc: 'McGhee, Eddie'; 'full-disclosure'; security-basics@...urityfocus.com
Subject: RE: [Full-disclosure] Risk measurements

OK, coconuts and flames aside, a serious question then…  You know, in case
I’m really missing something here…

Let’s move past the “probability of system compromise” in the sense of any
one system and look to what I think you are really getting at, which is “of
all the systems we have, some will get compromised, and here is how many” –
is THAT what you guys are getting at?  If so, I’ve got some questions:

It would seem to me that this would only be applicable to massive
installations where they’ve got heterogeneous OSs, huge software bases,
clients all over the place and multiple paths in and out to who knows where. 
 If that were the case, and we were going to base this on historical
“performance” of OSs and software already deployed, wouldn’t this type of
company ALREADY HAVE actual, historical information on compromise as it
refers to them?   Even if you did come up with some model as it applied
globally, wouldn’t a company’s internally gathered statistics be more
valid?  Further, even if your stats were absolutely valid, wouldn’t a
company look to their own stats first to see if your stats matched?  It
would seem that they would only embrace your stats as valid if they exactly
matched what they already knew, since it is based on actual history and not
future probability.  Again, even if your stats were based on some new genius
method you came up with (not being sarcastic, either) would it not be
destined to be compared with something a company already knew?  And if your
figures matched their figures, their figures must also be valid, and would
thus show that they don’t need yours?

To me, the only way anyone would look to such statistics would be to see if
they could get away with spending less to fix something in the future than
what they spent in the past, using your method as a basis for that decision. 
If your stats show a higher future probability of compromise than what
they’ve had in the past, there is no way they will spend more money up front
to potentially fix something than what they’ve had to spend in the past to
get it fixed.   So it seems that you would only have 1 shot at being right,
but a million shots at being wrong.  And if you ever were wrong, they would
blame you.   It seems like a “one-off potential win” scenario to me.  

t



From: Craig S. Wright [mailto:craig.wright@...ormation-Defense.com] 
Sent: Friday, February 12, 2010 11:40 AM
To: Valdis.Kletnieks@...edu; 'Christian Sciberras'
Cc: 'McGhee, Eddie'; 'full-disclosure'; security-basics@...urityfocus.com;
Thor (Hammer of God)
Subject: RE: [Full-disclosure] Risk measurements

Exactly,
As Valdis has stated, we want economic optimality. Valdis has stated this in
a far easier-to-understand manner than I have.
I will publish a financial model on the blog this weekend that displays the
relationships graphically.
Regards,
...
Dr. Craig S Wright GSE-Malware, GSE-Compliance, LLM, & ...
Information Defense Pty Ltd
_____________________________________________
From: Valdis.Kletnieks@...edu [mailto:Valdis.Kletnieks@...edu]
Sent: Friday, 12 February 2010 11:31 PM
To: Christian Sciberras
Cc: craig.wright@...ormation-defense.com; McGhee, Eddie; full-disclosure;
security-basics@...urityfocus.com; Thor (Hammer of God)
Subject: Re: [Full-disclosure] Risk measurements
On Fri, 12 Feb 2010 13:09:55 +0100, Christian Sciberras said:
> There's a time for finding fancy interesting numbers and a time to get
> the system going with the least flaws possible.
You don't want "the least flaws possible".  We can get very close to zero
flaws per thousand lines of code - but the result ends up costing hundreds
of dollars per line.  You want "the most economical number of flaws" - if
you get it down to 10 flaws, and the next flaw will cost you $750,000 to fix,
but you estimate your loss as $500,000 if you don't fix it and get hacked,
why are you spending $250,000 extra to fix the flaw?
> Why should any entity bother with risk modeling if it is not used at all?
> Here's the real question to the subject; What does risk modeling fix?
Risk modeling is what tells you the flaw will cost $500K to not fix.
And since you totally screw the pooch if you got it wrong and not fixing
it costs $1M, people like to do a good job of risk modelling.
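
The arithmetic in Valdis's example reduces to comparing the cost of a fix
against the expected loss of leaving the flaw in place; a minimal sketch using
his figures (the p_incident parameter is an added, hypothetical knob, not part
of his wording).

# Sketch only: the fix-vs-accept comparison from the example above.
def worth_fixing(fix_cost, loss_if_not_fixed, p_incident=1.0):
    # Fix only when the expected loss from leaving the flaw exceeds the fix cost.
    return p_incident * loss_if_not_fixed > fix_cost

print(worth_fixing(750_000, 500_000))    # False: the fix costs $250K more than the loss
print(worth_fixing(750_000, 1_000_000))  # True: if the unfixed loss is $1M, fix it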

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.grok.org.uk/full-disclosure-charter.html
Hosted and sponsored by Secunia - http://secunia.com/
