From: peter at peterswire.net (Peter Swire)
Subject: Response to comments on Security and Obscurity

	Some responses to the first morning's worth of comments.  A big reason for
posting the paper to Full Disclosure was to make the paper less stupid -- to
learn from the list.  I've been working on this topic since I left the White
House in early 2001, where I worked on privacy and computer security issues
including the Federal Intrusion Detection Network.  A 2001 version of the
paper needed a lot of work, and is still on the publications page of my web
site as a work in progress ("What Should be Hidden or Open in Computer
Security?").  I've presented this material to technical audiences many times
since, and continue to try to improve it.  I also continue to think this is
an important topic for computer security, homeland security, and physical
security (especially after all the pro-secrecy actions since 9/11): when is
secrecy justifiable at all, and when does it instead lead to bad security in
addition to bad accountability?

	Stephane Nedrowsky writes: "It seems 'full' is limited to algorithms,
and does not extend to secrets (such as passwords).  What would be the use of
a safe if the secret (either the code or the key) were written on the door?
(I know -- in case of fire, a safe is safer than the fireman and his water.)
It looks like computer and military security are not so different."

	Peter: I emphasize (p. 23) something that everyone on this list knows:
passwords and similar secrets should remain secret.  If people on the list
think "computer and military security are not so different," then perhaps the
paper will have spurred some fruitful comparisons.

	Stephane Nedrowsky separately writes, citing Kerckhoffs.  My crypto
discussion builds on Kerckhoffs' assumption that a cryptographic algorithm
should be designed to withstand full disclosure of the algorithm.  One way to
frame the paper is to ask: how generalizable is that assumption?  I try to
show a series of settings where generalizing to "no security through
obscurity" is likely to be incorrect.

	Dave Aitel writes, correctly, that I've never written an exploit, but
then concludes that the paper is therefore "academic fluff."  Perhaps.  Dave
-- to reduce my stupidity, can you explain the flaw in the quote I give?  I
thought I was explaining how much easier it is to probe secrets when the
attacker can attack over and over again.  The idea of the "first-time
attack," or the "uniqueness of the attack," is a unifying theme that has
helped me analyze when secrecy is most likely to help a defender.  If you
disagree with that conclusion, please explain why.
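
	To make the intuition concrete, here is a toy calculation (my own
illustrative sketch with assumed numbers -- it is not taken from the paper).
If each probe independently reveals the secret with some small probability,
repeated probing compounds quickly, while a one-shot "first-time" attacker is
stuck with the single-try odds:

    # Toy model: one-shot attacker vs. an attacker who can probe repeatedly.
    # The per-probe probability below is an assumption for illustration.

    def success_probability(per_try: float, attempts: int) -> float:
        """Chance of at least one success across independent attempts."""
        return 1.0 - (1.0 - per_try) ** attempts

    p = 1e-4  # assumed chance that a single probe reveals the secret

    print(f"one attempt:     {success_probability(p, 1):.6f}")       # ~0.000100
    print(f"10,000 probes:   {success_probability(p, 10_000):.6f}")  # ~0.632139
    print(f"100,000 probes:  {success_probability(p, 100_000):.6f}") # ~0.999955

A defender facing a single, unique attack gets far more mileage out of
secrecy than one whose secret must survive thousands of probes.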

	Dave Aitel also criticizes analogies between computer and physical
security.  Is that topic strictly off-limits for discussion?  Yes, information
can sometimes be copied while chairs cannot.  Does that change everything
about security?  The paper proposes explanations for why computer and
physical security often differ: computer security typically features a high
number of attacks, learning by attackers from each attack, and communication
among attackers.  At the same time, some physical situations have those same
features.  Where is the flaw in that analysis?
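
	Here is a second toy simulation (again my own sketch with assumed
numbers, not from the paper) of why the learning and communication features
matter: an attacker community that shares results never repeats a failed
guess, while isolated attackers effectively guess with replacement.

    # Toy simulation: coordinated attackers (shared memory of failures)
    # vs. isolated attackers (independent guesses, repeats possible).
    # The keyspace size and trial count are illustrative assumptions.
    import random

    N = 10_000  # assumed size of the secret's search space

    def tries_with_sharing(secret: int) -> int:
        """Attackers pool results and never retry a failed guess."""
        order = list(range(N))
        random.shuffle(order)
        return order.index(secret) + 1

    def tries_without_sharing(secret: int) -> int:
        """Each guess is independent; failures teach nothing."""
        tries = 1
        while random.randrange(N) != secret:
            tries += 1
        return tries

    trials = 200
    secrets = [random.randrange(N) for _ in range(trials)]
    avg_shared = sum(tries_with_sharing(s) for s in secrets) / trials
    avg_isolated = sum(tries_without_sharing(s) for s in secrets) / trials
    print(f"shared learning:  ~{avg_shared:,.0f} tries (about N/2)")
    print(f"no learning:      ~{avg_isolated:,.0f} tries (about N)")

And this understates the effect: real failed probes often leak partial
information rather than nothing, which erodes a kept secret even faster.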

	Chief Gadgeteer says "the premise laid out up to this point are sand."  He
then says he stopped reading at page 8.  If he reads the entire paper (which
answers a bunch of his other objections), then I'll comment.

	Barry Fitzgerald writes a really thoughtful critique based on the nature
of Free Software ideology.  He says that "there aren't that many real-world
differences between the military paradigm and the Open Source paradigm
regarding secrecy of proprietary information."  He seems to be saying that
the real debate is over what the scope of proprietary information should be.
I'm going to think more about what he has said.

	Best,

	Peter


Prof. Peter P. Swire
Moritz College of Law of the
    Ohio State University
John Glenn Scholar in Public Policy Research
(240) 994-4142; www.peterswire.net

-----Original Message-----
From: Barry Fitzgerald [mailto:bkfsec@....lonestar.org]
Sent: Wednesday, September 01, 2004 10:49 AM
To: Peter Swire
Cc: full-disclosure@...ts.netsys.com
Subject: Re: [Full-Disclosure] New paper on Security and Obscurity


Peter Swire wrote:

>Greetings:
>
>	I have been lurking on Full Disclosure for some time, and now would like
>to share an academic paper that directly addresses the topic of "full
>disclosure" and computer security:
>
>
>

Hello Peter,

There are some glaring flaws in the basis of this paper. Though I tend to
agree with its abstract theme (that there is both a place for secrecy and a
place for disclosure), I disagree with the very basis of the analysis. It
seems to oversimplify both the military position and the "Open Source and
Encryption" position. Further, it misrepresents the arguments of disclosure
advocates.

The paper assumes (without adequate evidence) that the military and Open
Source positions are fundamental opposites. In actual practice, this couldn't
be further from the truth. I'm not denying that the military's primary policy
is to maintain secrecy, or that Open Source ideology dictates disclosure;
that much is blatantly true. However, for your model to work, these
oversimplifications have to be put back into their actual context.

First and foremost, when talking about disclosure, most Free Software and
Open Source advocates are referring to disclosure regarding "things" that
they have direct access to. They're referring to programs that are
distributed to them. In fact, this is written into the archetypal Free
Software document, the GNU General Public License. If I write a program and
never distribute it to you, I have absolutely no obligation to disclose
anything about the program to you. Similarly, if I modify a GNU GPL'ed
program and don't distribute it, I have no obligation to disclose anything.
I can even distribute the program to an isolated set of people and still have
no obligation to share any information with you if you aren't one of the
recipients. (Note: in this economy, the program will probably get distributed
and disclosure will eventually occur, because the people I distribute it to
can choose to redistribute it -- but they might not so choose.) Any
customizations I make can stay secret -- that is written into the ideology
and the practice.

You can extend this to identify the *true* rule of disclosure in the Free
Software and Open Source movements: if you "own" something (though software
is not exactly owned by the user), you should have the right to modify it to
fit your needs. For that right to exist, disclosure must occur. Hence,
disclosure only applies to items that are openly distributed -- full
disclosure in the market sense.

This is a fundamental point, because the military secrecy argument applies
almost exclusively to proprietary information used only by the military. I
can't own a Trident missile, so not having access to its design schematics
is not counter to Free Software/Open Source ideology.

Now we get into a little cultural history, applying this to society in
general. The Free Software movement does have, within its roots, the
ideological belief that information "wants" to be free: all information will
eventually get out, and therefore relying on secrecy is foolish. This is
fundamentally true, but only for information that a person comes in contact
with. If I have a black box with some function that is locked by the
manufacturer, I can eventually glean enough information out of it to discover
its secrets. There is no way to hide secrets indefinitely.

The military doesn't even hide secrets indefinitely. There is a limit to how
long information can be regarded as top secret. Eventually all secrets are
disclosed, if they're sufficiently interesting that someone would look for
them. In the context of our society, this is absolutely essential. Without
information disclosure, you have a dictatorial tyranny. Participation in the
system is essential for democracy, but perhaps even more essential is open
access to the secrets of the "democratic" nation. Without access to this
information, the polis is making decisions blindly, and such a society would
be a democracy in name only, not in function.

The information distribution context, in either case, has to be taken into
account -- and I think that once it is, you'll see that there aren't that
many real-world differences between the military paradigm and the Open
Source paradigm regarding secrecy of proprietary information. The difference
is the belief in whether or not disclosure of infrastructure can create an
economic benefit. Note that I'm referring to specialized infrastructure
(like, say, a corporate network) and not generalized infrastructure. The
reason for keeping Trident missile design specs secret, for example, is to
keep "enemies" from reproducing them. This is a very specialized motivation
and has to be taken into account when analyzing the issue. To understand the
comparison, consider how many public projects the military runs and how much
public infrastructure it uses. The military actively benefits from technical
disclosure on a regular basis. I think you'll find that the military is much
more open than it advertises itself to be.

A flaw in the basis of the analysis can bring into question the entire
method of analysis.

-Barry

p.s. It's good that someone is trying to tackle this issue. I do have to
agree with Dave Aitel, though, and say that you should not publish this
until you are 100% certain that it is accurate -- which it may never be.
This kind of paper can be very influential and should be done with great
care. If incorrect conclusions are gleaned from the data, it could be
catastrophic.


