From: dave at immunitysec.com (Dave Aitel)
Subject: Response to comments on Security and Obscurity

As the Japanese Proverb says, "Only painters and lawyers can change
black to white."  

What are your goals with this paper? If you've gotten a mostly hostile
response, then keep in mind that this is a ten-year-old debate in this
and other on-line forums, and that despite your previous "White House
Privacy Czarness", you don't have any information security background.
In addition, legal academia often provides a lot of the background for
actual law. The laws in this area (the DMCA, etc.) are horribly
dysfunctional, and if based on "research" such as your paper, they are
only going to get more so. Furthermore, these awful but well-meaning
laws directly impact the freedom of many people, hinder business, and
generally cause misfortune even to the causes they claim to serve,
such as "Homeland Security (tm)".

If, as is suspected, you are trying to lay the legal framework for
future laws that will put penalties on the disclosure of certain kinds
of information, or the groundwork for a government agency to mandate
information security requirements on private citizens, then you can
expect a long and bloody fight in this, and every other, arena.


Further comments in-line with previous text.

On Wed, 2004-09-01 at 11:27, Peter Swire wrote:

<snip>
> 	Dave Aitel writes correctly that I've never written an exploit but then
> concludes that the paper is therefore "academic fluff."  Perhaps.  Dave --
> to reduce my stupidity, can you explain the flaw in the quote I give?  I
> thought I was explaining how much easier it is to probe secrets when the
> attacker can attack over and over again.  The idea of the "first time
> attack" or the "uniqueness of the attack" is a unifying theme that has
> helped me analyze when secrecy is most likely to help a defender.  If you
> disagree with that conclusion, please explain why.
> 

The flaw in your specific example is that every program can be run as
many times as you need to "attack" it. You would never need more than
one copy. Think "The Matrix" if you want a physical analogy, or the
replicators in Star Trek. Another information-theory point is that you
can halfway know a secret, whereas I can't halfway steal your chair
(especially not without you knowing). My comment about writing exploits
was meant to imply that people who undertake the discipline of finding
and writing exploits and attacking networks also learn a lot of theory
instinctively, and that theory would have kept them from making the
kinds of mistakes the paper makes.
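
To make the "attack it over and over" point concrete, here is a toy
Python sketch. It is entirely my own illustration -- the secret, the
leaking comparison, and the numbers are invented, not anything from the
paper -- but it shows the two properties that matter: the attacker has
his own copy of the program, can run it as many times as he likes for
free, and every failed attempt still leaves him knowing a little more
of the secret.

import string

SECRET = "hunter2"   # hidden inside the attacker's local copy of the program

def leaky_check(guess):
    """Stand-in for any detail that leaks partial information (timing,
    error messages, crash offsets): returns how many leading characters
    of the guess are correct."""
    n = 0
    for g, s in zip(guess, SECRET):
        if g != s:
            break
        n += 1
    return n

def recover_secret(length):
    """Run the 'attack' as many times as needed against the same copy,
    keeping the partial knowledge gained from every failed attempt."""
    known = ""
    attempts = 0
    while len(known) < length:
        for c in string.printable:
            attempts += 1
            if leaky_check(known + c) > len(known):
                known += c   # the attacker now "halfway knows" the secret
                break
    print("recovered %r in %d attempts" % (known, attempts))
    return known

recover_secret(len(SECRET))

Try doing that to a chair.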

In many places the paper clearly confuses "finding vulnerabilities",
"writing exploits" and "running attacks", which are three different
things; the paper lumps them all together as "attack", tries to make
analogies about that, and circles back to meaningless equations where
necessary. This trend continues throughout the paper until page 28.

The paper goes off on a sidetrack about people trying to find the
hidden gems in video games - an activity that may or may not have
something to do with computer security, but is clearly irrelevant
("fluffy") in this context.

A key missing concept in the paper is hidden by the fuzziness of
"benefit." Every time the paper examines the benefit, it appears to do
so one-dimensionally, as if there were a single "good for society"
benefit that outweighed all other considerations. But the benefit of
disclosure varies widely among the actors in the game. For example, it
is very costly for Microsoft to disclose a vulnerability, very good for
some attackers, very good for some defenders, and very bad for others
in both camps. I.e., if I have an 0day in Kerberos 5, it annoys me when
someone discloses it.
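
To make that concrete, here is a purely illustrative sketch -- every
number below is invented; it is the shape of the argument, not data --
of what "the benefit of disclosure" looks like once you stop pretending
it is a single scalar:

# Hypothetical payoffs, one per actor, for disclosing a single vulnerability.
disclosure_payoff = {
    "vendor (e.g. Microsoft)":           -10,  # patching, PR and support costs
    "attacker who lacked the bug":        +5,  # a new bug to weaponize
    "attacker already holding the 0day":  -8,  # his private bug just died
    "defender who patches promptly":      +7,  # can finally close the hole
    "defender who never patches":         -4,  # now exposed to more attackers
}

# Collapsing this into one "good for society" number throws away the part
# of the problem that matters: who gains, who loses, and by how much.
print(sum(disclosure_payoff.values()))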

Also, the paper doesn't do a good job of proving that the Efficient
Capital Markets Hypothesis is relevant to the discussion. It's clearly
true that attackers will gain a lot from disclosure, but the Open Source
model doesn't care, because it has only one way to fix its software:
disclosing bugs. The paper even goes so far as to say the ECMH probably
doesn't apply. But if it doesn't apply, why mention it? (Page 30 implies
that the paper was simply suggesting it as an area for further
"research", but that would make a better footnote than a paper section.)
Adding to the feeling of fuzziness is the way the paper reaches for an
analogy in another social science, and fails.

And if the model can't make predictions about whether open source
software is more secure than closed source software, what can it do? Is
it a model for modeling's sake? Surely this is an area where E, N, L, C,
A-D, and A-T can be given some guesstimates and plugged into the
equation? Nowhere in the paper is the basic structure of the equation
"f" specified.

I could write "The security of any piece of software is a function
S = f'(T, E, N, L, C), where f' has all the variables from f, plus T,
where T is the length of time my turtle, Turtle-I, spent sunning itself
today while I thought about your software", and without some basic idea
of f', it would be just as accurate (probably more so, since it includes
an actual data point).
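
If that sounds flippant, here is a toy demonstration of the problem --
all numbers and both candidate functions are invented by me, and I am
even guessing at the sign conventions (E, N, L, C hurt the defender;
A-D and A-T help), since the paper never pins down even that much. Two
functions of the same six variables, both "reasonable" under those
guesses, rank the same two guesstimated scenarios in opposite order:

scenario_a = dict(E=0.5, N=100,  L=0.2, C=0.2, A_D=0.3, A_T=0.3)  # guesstimates
scenario_b = dict(E=0.5, N=2000, L=0.9, C=0.9, A_D=0.9, A_T=0.3)  # guesstimates

def f_one(E, N, L, C, A_D, A_T):
    # one arbitrary shape for f: weights the volume of attacks heavily
    return A_D + A_T - 0.001 * N - E * L * C

def f_two(E, N, L, C, A_D, A_T):
    # another arbitrary shape: weights defensive alteration heavily instead
    return 3 * A_D + A_T - 0.00005 * N - 0.1 * E * L * C

for f in (f_one, f_two):
    a, b = f(**scenario_a), f(**scenario_b)
    print("%s: S(A)=%.3f  S(B)=%.3f  -> scenario %s wins"
          % (f.__name__, a, b, "A" if a > b else "B"))

Until the paper commits to a structure for f, it is a list of variable
names, not a model, and Turtle-I's sunning schedule is as good a term
as any.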

The paper claims to identify a systematic way of identifying the costs
and benefits(!) of disclosure for security (whose?), but it fails to
prove that the system includes all the variables needed to reach valid
conclusions, fails to demonstrate the system against a real-world
problem, and fails to clearly specify the model itself so the results
can be tested by third parties.

Other flawed analogies in the paper include, but are not limited to,
the "soldier" analogies and the "castle/house" analogies. The rule of
thumb is: "If you are using a house analogy, you have stopped saying
anything interesting about information security." If I walk into the
Supreme Court and base my arguments on analogies to passages in the
Bible, it doesn't matter how many "Former"s I have after my name, I'm
still going to lose the case.

Dave Aitel
Immunity, Inc.
646-327-8429



> 	Dave Aitel also criticizes analogies of computer and physical security.  Is
> that topic strictly off-limits for discussion?  Yes, sometimes information
> can be copied but chairs cannot.  Does that change everything about
> security?  The paper proposes explanations for why computer and physical
> security are often different, because computer security often features a
> high number of attacks, learning by attackers from each attack, and
> communication among attackers.  At the same time, some physical situations
> have those same features. Where is the flaw in that analysis?
<snip>
> 	Best,
> 
> 	Peter
> 
> 
> 
> 
> 
> 
> Prof. Peter P. Swire
> Moritz College of Law of the
>     Ohio State University
> John Glenn Scholar in Public Policy Research
> (240) 994-4142; www.peterswire.net
> 
> -----Original Message-----
> From: Barry Fitzgerald [mailto:bkfsec@....lonestar.org]
> Sent: Wednesday, September 01, 2004 10:49 AM
> To: Peter Swire
> Cc: full-disclosure@...ts.netsys.com
> Subject: Re: [Full-Disclosure] New paper on Security and Obscurity
> 
> 
> Peter Swire wrote:
> 
> >Greetings:
> >
> >	I have been lurking on Full Disclosure for some time, and now would like
> >to share an academic paper that directly addresses the topic of "full
> >disclosure" and computer security:
> >
> >
> >
> 
> Hello Peter,
> 
> There are some glaring flaws in the basis of this paper. Though I
> tend to agree with the abstract theme of the paper (that there is
> both a place for secrecy and a place for disclosure), I disagree with
> the very basis of the analysis. It seems to oversimplify both the
> military position and the "Open Source and Encryption" position.
> Further, it misrepresents the arguments of disclosure advocates.
> 
> The paper makes the assumption (without adequate evidence) that the
> military and Open Source positions are fundamental opposites when
> juxtaposed. In actual practice, this couldn't be further from the truth.
> I'm not saying that the military's primary policy isn't to maintain a
> state of secrecy, or that Open Source ideology doesn't dictate
> disclosure; that much is blatantly true. However, in order for your
> model to work, these oversimplifications have to be put into their
> actual context to be understood.
> 
> First and foremost, when talking about disclosure most Free Software and
> Open Source advocates are referring to disclosure regarding "things"
> that they have direct access to. They're referring to programs that are
> distributed to them. In fact, this is written into the archetype Free
> Software document, the GNU General Public License. If I write a program
> and never distribute it to you, I have absolutely no (0) obligation to
> disclose anything about the program to you. Similarly, if I modify a GNU
> GPL'ed program and don't distribute it, I have no obligation to disclose
> anything. I can even distribute the program to an isolated set of people
> and I still have no obligation to share any information with you if you
> aren't one of the recipients. (note: in this economy, the program will
> probably get distributed and disclosure will eventually occur because
> the people I distribute it to can choose to distribute it -- but, they
> might not choose to.) Any customizations I make can stay secret -- it's
> written into the ideology and practice.
> 
> You can extend this to identify the *true* rule of disclosure in the
> Free Software and Open Source movement: if you "own" something (though
> software is not exactly owned by the user) you should have the right
> to modify it to fit your needs. In order to have this right, disclosure
> must occur. Hence, disclosure only applies to items that are openly
> distributed. Full disclosure in the market sense.
> 
> This is a fundamental point because the military secrecy argument
> applies almost exclusively to proprietary information utilized almost
> exclusively by the military. I can't own a Trident missile, so not
> having access to its design schematics is not counter to Free
> Software/Open Source ideology.
> 
> Now we get into a little cultural history and applying this to society
> in general. The Free Software movement does have, within its roots, the
> ideological belief that information "wants" to be free. All information
> will eventually get out and therefore, relying on secrecy is foolish.
> This is fundamentally true. It's fundamentally true because it only
> applies to information that the person comes in contact with. If I have
> a black box that has some function but it's locked by the manufacturer,
> I can eventually glean information out of it -- enough to discover its
> secrets. There is no way to hide secrets indefinitely.
> 
> The military doesn't even hide secrets indefinitely. There is a limit
> to how long information can be regarded as top secret. Eventually all
> secrets are disclosed, if they're sufficiently interesting that someone
> would look for them. In the context of our society, this is absolutely
> essential. Without information disclosure, you have a dictatorial
> tyranny. Participation in the system is essential for
> democracy, but perhaps even more essential is open access to the secrets
> of the "democratic" nation. Without access to this information, the
> polis is making decisions blindly. Thus, said society would only be a
> democracy in name and not in function.
> 
> The information distribution context, in either case, has to be taken
> into account -- and I think that once this is done, you'll see that
> there aren't that many real-world differences between the military
> paradigm and the Open Source paradigm regarding secrecy of proprietary
> information. The difference is the belief in whether or not disclosure
> of infrastructure can create an economic benefit. Note that I'm
> referring to specialized infrastructure (like, say, a corporate network)
> and not a generalized infrastructure. The reason for keeping Trident
> missile design specs secret, for example, is to keep "enemies" from
> reproducing them. This is a very specialized motivation and has to be
> taken into account when analyzing the issue. To understand the
> comparison, consider how many public projects the military runs and how
> much public infrastructure it uses. The military does actively benefit
> on a regular basis from technical disclosure. I think you'll find that
> the military is much more open than it advertises itself to be.
> 
> A flaw in the basis of the analysis can bring into question the entire
> method of analysis.
> 
> -Barry
> 
> p.s. It's good that someone is trying to tackle this issue. I do have
> to agree with Dave Aitel, though, and say that you should not publish
> this until you are 100% certain that it is accurate, which it may never
> be. This kind of paper can be very influential and should be done with
> great care. If incorrect conclusions are gleaned from the data, it
> could be catastrophic.
> 
> 
> _______________________________________________
> Full-Disclosure - We believe in it.
> Charter: http://lists.netsys.com/full-disclosure-charter.html

