Message-ID: <1094148543.9497.75.camel@localhost.localdomain>
From: dave at immunitysec.com (Dave Aitel)
Subject: Security & Obscurity: First-time attacks and
	lawyer jokes

On Thu, 2004-09-02 at 12:24, Peter Swire wrote:

> > The flaw in your specific example [about a software program freezing
> > up when it is attacked] is that every program can be run as
> > many times as you need to "attack" it. You would never need more than
> > one copy. 
> 
> 	First, there are times when you cannot attack the program over
> and over.  For instance, you may not have the ability to access the
> software over and over again, such as when it is running on someone
> else's system and you don't have continuous access.  Second, other
> persons on FD have written to me privately about self-modifying code
> that would render Dave Aitel's point untrue.  With that said, the
> example could be better written.
> 

This is an example of a disparity in how the word "attack" is used,
which I'm having a hard time explaining. "Attacking" a program means
analyzing it for vulnerabilities; it does not mean actually attacking
a networked host,
which is the other usage of the term. I hesitate to discuss web
application attacks in this conversation simply because it would further
muddy the waters, but they are one of the very few classes of attack
where you would research vulnerabilities in a program not hosted in your
environment. But your paper uses the example to try to prove a general
rule. In general, I control the program I am researching entirely.
Self-modifying code has no effect on me, because I can always reverse
state to the Big Bang, as far as the program is concerned.
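
To make that concrete, here is the kind of throwaway harness I mean: a
rough sketch in Python, with made-up paths, that restores a pristine
copy of the target binary into a scratch directory before every run,
so nothing the program does to itself (self-modification included)
survives from one run to the next.

    import os
    import shutil
    import subprocess
    import tempfile

    # Hypothetical path to an untouched copy of the target; never run directly.
    PRISTINE = "/opt/targets/widget.bin"

    def run_once(test_input: bytes):
        """Copy the pristine binary into a fresh directory, run it, discard everything."""
        workdir = tempfile.mkdtemp(prefix="analysis-")
        target = os.path.join(workdir, "widget.bin")
        shutil.copy2(PRISTINE, target)          # reset the program to its "Big Bang"
        try:
            return subprocess.run([target], input=test_input,
                                  capture_output=True, timeout=5)
        finally:
            shutil.rmtree(workdir, ignore_errors=True)  # nothing persists across runs

    # Run it as many times as the analysis needs; every run starts from scratch.
    for i in range(1000):
        try:
            result = run_once(bytes([i % 256]) * 64)
        except subprocess.TimeoutExpired:
            continue
        if result.returncode < 0:               # killed by a signal; worth a closer look
            print(i, result.returncode)

Substitute a debugger, a fuzzer, or a full VM snapshot for the copy
step; the point is the same. The researcher, not the program, decides
what state each run starts from.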


> 	Much more important, though, is that Dave accepts one of the
> fundamental points of my paper in trying to refute it.  He says "every
> program can be run as many times as you need to attack it."  Exactly!
> The big difference between physical and computer security that I
> emphasize is the number of attacks.  Dave emphasizes the number of
> attacks.  Hey, it's a unifying principle that even lawyers and
> non-experts can understand in the future!  (See separate post today on
> why the analogy between physical and cyber security is useful.)
> 

I believe you are trying to put words in my mouth here. When you say
attack, it's very different from when I say attack. There are MANY
general differences between physical and computer security and there are
fundamental specific differences (which is why the paper's analogies are
worthless). 

The Heisenberg Uncertainty Principle is a unifying principle that even
lawyers and non-experts can understand, but that doesn't mean they
should attempt to influence legislation or policy based on it. This
paper has failed to prove its "uniqueness" principle in the first
place, and that principle is certainly less robust under examination
than the Uncertainty Principle.


> 	A theme of the paper: when attacks are closer to first-time
> attacks (when they have high uniqueness), then secrecy can be an
> effective tool.  When there is low uniqueness, security through
> obscurity is BS.  And many, many cyberattacks fall into the second
> category.
> 

This is a very broad claim which the paper fails to support in any
real way. It also fails to define terms like "effective". These days,
"effective" means "return on investment", but it's unclear what the
paper means by it, and it's unclear that the paper could support its
claim under a "return on investment" definition.
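
For what it's worth, when security people do put a number on "return
on investment", it's usually back-of-the-envelope arithmetic along the
following lines. The figures here are invented, and the formula is the
generic one, not anything taken from the paper.

    # Rough "return on security investment" arithmetic, invented numbers.
    ale_before = 200_000.0  # annualized loss expectancy without the control
    ale_after  =  50_000.0  # annualized loss expectancy with the control
    cost       =  60_000.0  # yearly cost of the control

    rosi = (ale_before - ale_after - cost) / cost
    print(f"ROSI: {rosi:.0%}")  # prints "ROSI: 150%"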


>  
> > The paper goes into some sidetrack about people trying to find the
> > hidden gems in video games - an activity that may or may not have
> > something to do with computer security, but is clearly irrelevant
> > ("fluffy") in this context.
> 
> 	My students love the video game part when they read it -- it
> helps them see the similar patterns of cyberattacks, physical attacks,
> and video game "attacks."
> 

This is exactly why it's so dangerous and should be stricken - because
it implies patterns and associations between two disparate fields (game
playing and information security) that are not really there. As for
"discussions" with various experts - hearsay testimony is normally
stricken from the court record, correct? Likewise, I hang around with a few
financial experts, but I don't spend a lot of time writing papers on
economic theory, no matter how late I stay up with them while they
slowly explain the theory to me.


<snip quoted text>
> 
> 	Some discussion on FD and elsewhere assumes that vulnerabilities
> will be found very, very efficiently -- if the flaw is there, then it's
> a matter of only a short wait before someone finds the flaw.  In talking
> with people who write software, however, I was repeatedly struck by
> their observation that it takes considerable hard work and expertise to
> find new vulnerabilities.  The ECMH discussion gives reasons for
> thinking that vulnerability discovery will, in some settings, be less
> instantaneous than many seem to have assumed.
> 

And the right person to be writing a paper of this sort would be someone
who's had significant experience finding flaws in software. And, in
fact, there are many such papers. They come out almost daily. So much
so that I released one as a joke a couple of weeks ago called "Total Cost
of 0wnership" - (It had metrics and tables! I'll put an equation in my
next one as soon as I figure out how to get OpenOffice to do that).
There are classification trees for vulnerabilities and exploits, return on
investment white-papers, and software vulnerability frequency metrics of
all sorts. None of these papers has generated anything more worthy of
thought than marketing material.

If you want to know whether or not it's hard to do - try to pay someone
to do it. Most vulnerability researchers make somewhere between the salary
of a gas-station attendant and that of a first-year lawyer. So it's about that
"hard". (See, and I've used the ECMH here to prove it :>)

You didn't address my claims that the scientific validity of the paper
was extremely weak, due to the lack of reproducibility and testability.
Perhaps I was reading too much into the format of the paper. If it was
not an attempt to do valid research and analysis of the subject, but
simply a plan to fool a few law students into thinking they understood
the basics of information security, then it's most likely perfectly
fine. If you're trying to prove a point which the FBI and CIA can use to
determine how to run a war, then I believe you're massively off target.
Unless it's part of some "bodyguard of lies" in which case, I'm sorry
for giving the game away.

Dave Aitel
Immunity, Inc.

> 	Peter
> 
> 
> Paper at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=531782
> 
>  
> 

