Message-ID: <011801caab52$bd266b00$37734100$@wright@Information-Defense.com>
Date: Fri, 12 Feb 2010 06:45:15 +1100
From: "Craig S. Wright" <craig.wright@...ormation-Defense.com>
To: "'Thor \(Hammer of God\)'" <Thor@...merofgod.com>,
"'full-disclosure'" <full-disclosure@...ts.grok.org.uk>,
<security-basics@...urityfocus.com>
Cc: "'McGhee, Eddie'" <Eddie.McGhee@....com>
Subject: Risk measurements
The simple answer to these posts is that I am passionate about this topic.
That passion has let me be drawn into a flame war with Tim, something he is
far better at than I am.
Risk and economics matter to security. Like it or not, money is a limited
resource, and spending it on the measures that return the most effective
results matters. Going to management with another request for more money
means taking funds from somewhere else where they may be better utilised.
In a few weeks I am submitting a series of papers on risk modelling to IEEE
and other peer-reviewed venues. Together, these form the foundation of an
expert system. As Tim and others assert, a mathematically based system is not
perfect; that is what probability means. I have not aimed at perfection, which
is a fool's errand. I have aimed at economic optimality: the best result for
the best economic return. This can be argued in a heated debate, but the
matter will not be settled that way.
These papers will be in the public domain. At that point, the answer is
simple: the assertions I make in them can be tested. I do not assert that they
will lead to perfect calculations of what will occur. If that were true, it
would not be risk. By its very definition, risk is a probabilistic function.
Many people in the industry seem to forget this.
An expert system does not have to be perfect to have value. It needs to be
better than what we do now, which is commonly no better than taking one number
that an expert makes up and multiplying it by another made-up number. A system
that works within a confidence bound will, by definition, miss some instances
of attack. The difference is that the number of errors can also be predicted.
You may not know which system gets compromised, but you can estimate how many
will be compromised over a given period. For an organisation, this has value.
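A trivial sketch makes the point (the figures and the binomial assumption are
my illustration here, not anything from the papers): if each of n servers
carries an estimated annual compromise probability p, the count of compromises
over the year is roughly binomial, so the expected number and a rough error
bound follow directly.

    import math

    n = 200      # servers in scope (assumed figure for illustration)
    p = 0.03     # estimated annual compromise probability per server (assumed)

    mean = n * p                      # expected compromises per year
    sd = math.sqrt(n * p * (1 - p))   # binomial standard deviation
    lo = max(0.0, mean - 1.96 * sd)   # rough 95% bound (normal approximation)
    hi = mean + 1.96 * sd

    print(f"expected compromises/year: {mean:.1f}")
    print(f"approx. 95% range: {lo:.1f} to {hi:.1f}")

With these assumed numbers you expect about 6 compromises a year, and you can
say the count will very likely fall between roughly 1 and 11, even though you
cannot say which servers they will be.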
This matters because management can make a choice based on reason. Some
servers will get compromised, but the cost of this occurring can be planned
for, and if the expected cost of a compromise is less than the cost of the
fix, then the fix is not cost-effective.
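The comparison itself is simple arithmetic. As a minimal sketch (the function
name and every figure below are assumptions for illustration, not the
published model): weigh the reduction in expected annual loss that a control
buys against the annual cost of the control.

    def expected_annual_loss(prob_per_year, loss_per_incident):
        # Expected loss per year: probability of the event in a year
        # multiplied by the cost when it occurs.
        return prob_per_year * loss_per_incident

    loss_without = expected_annual_loss(0.30, 40_000)  # no control (assumed figures)
    loss_with    = expected_annual_loss(0.05, 40_000)  # with the control (assumed)
    control_cost = 15_000                               # annual cost of the fix (assumed)

    benefit = loss_without - loss_with
    print(f"expected benefit of the control: {benefit:.0f}/year")
    print("worth doing" if benefit > control_cost else "not cost-effective")

With these assumed figures the control saves 10,000 a year against a 15,000
cost, so it fails the test; that is exactly the kind of decision the paragraph
above describes.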
"Everybody knows that you can't model risk".
Once, everybody knew that the earth was the centre of the universe and that
the stars were just holes in the carpet of the sky. Rhetoric has no scientific
value. Some people, such as Tim, may use it in a demagogic manner to cover the
facts. This is a common political attack. The issue is that it has no
alignment with truth. Truth is based on fact. The scientific method is a valid
measure, and little else is.
So, slur me, attack my character, and do whatever else seems fit. The end
result is that I shall publish later this year, in peer-reviewed journals and
at conferences.
I cannot win a flame war, nor can I win against rhetoric, and I am not
inclined to be a sophist. The simple answer will come from testing the models
and systems I shall be publishing. If they do better than existing risk
guessing, they are valuable. If they save money, they are valuable.
Regards,
...
Dr. Craig S Wright GSE-Malware, GSE-Compliance, LLM, & ...
Information Defense Pty Ltd