Message-ID: <002f01c49109$52c85390$0200a8c0@Peteroffice>
From: peter at peterswire.net (Peter Swire)
Subject: Security & Obscurity: First-time attacks and lawyer jokes
Dave Aitel wrote detailed comments, which I appreciate, and I respond to
some of them here. Others in the threads have made some similar
comments.
> As the Japanese Proverb says, "Only painters and lawyers can change
> black to white."
>
> What are your goals with this paper? If you seem to have gotten a
> mostly hostile response, then keep in mind that this is a ten year old
> debate in this, and other on-line forums, and that despite your previous
> "White House Privacy Czarness", you don't have any information security
> background.
I plead guilty to being a lawyer and law professor, who has
practiced law, worked in the government, and taught for a bunch of
years. Small measures of self-defense on my lack of infosec background:
(1) I've taught semester courses on the Law of Cybersecurity twice in
the past two years. (2) I presented earlier versions of this paper
before technical audiences that included Bruce Schneier, Matt Blaze, and
lots of other IT experts and stayed up late at night trying to learn
from them. (That doesn't mean they agree with the paper, but a lot of
earlier flaws have been fixed.) (3) In government, I worked daily with
the people in OMB who were responsible for computer security for the
federal government. (4) For the past couple of years I have been on the
Microsoft Trustworthy Computing Academic Advisory Board, with IT experts
including Eugene Spafford and a bunch more. That has immersed me in a
lot of security discussions, and I have continued to talk with many Open
Source programmers as well.
> In addition, legal academia often provides a lot of
> background for actual law. The laws (DMCA, etc.) in this area are
> horribly dysfunctional, and if based on "research" such as your paper,
> only going to get more so. Furthermore, these awful, but well meaning
> laws directly impact the freedom of many people, hinder business, and
> generally cause misfortune even to the causes they claim to provide for,
> such as "Homeland Security (tm)".
>
> If, as is suspected, you are trying to begin a legal framework for
> future laws which will put penalties on the disclosure of certain kinds
> of information, or the groundwork for a government agency to mandate
> information security on private citizens, then you can expect a long and
> bloody fight in this, and every other arena.
My belief is that the Department of Homeland Security and the
current Administration generally have gone far overboard in their
insistence on secrecy. The paper, by clarifying the military/intel
assumptions, seeks to show the relatively limited set of conditions
where the secrecy approach holds true in a networked world. Readers of
FD understandably are concerned that I am a secrecy nut, but in the
policy debates I am in fact much more likely to be supporting the
Freedom of Information Act and other openness initiatives than I am to
support secrecy and over-classification. More at
http://www.americanprogress.org/site/pp.asp?c=biJRJ8OVF&b=180516 and
http://www.americanprogress.org/site/pp.asp?c=biJRJ8OVF&b=180251 .
> The flaw in your specific example [about a software program freezing
> up if it is attacked] is that every program can be run as
> many times as you need to "attack" it. You would never need more than
> one copy.
First, there are times when you cannot attack the program over
and over. For instance, the software may be running on someone else's
system, and you may not have continuous access to it. Second, other
persons on FD have written to me privately about self-modifying code
that would render Dave Aitel's point untrue. With that said, the
example could be better written.
Much more important, though, is that in trying to refute the
paper, Dave accepts one of its fundamental points. He says "every
program can be run as many times as you need to attack it." Exactly!
The big difference between physical and computer security that I
emphasize is the number of attacks. Dave emphasizes the number of
attacks. Hey, it's a unifying principle that even lawyers and
non-experts can understand! (See separate post today on
why the analogy between physical and cyber security is useful.)
A theme of the paper: when attacks are closer to first-time
attacks (when they have high uniqueness), then secrecy can be an
effective tool. When there is low uniqueness, security through
obscurity is BS. And many, many cyberattacks fall into the second
category.
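One way to make the uniqueness point concrete (my own
back-of-the-envelope illustration, not a formula from the paper): if
each independent attempt has some small probability p of defeating an
obscurity-based defense, the chance the defense survives N attempts is
(1 - p)^N. Secrecy can look strong against a single attack and still
collapse against an attacker who can try a million times.

    # Back-of-the-envelope illustration (mine, not the paper's): how an
    # obscurity-based defense fares as the number of attack attempts grows.
    def survival_probability(p_per_attempt, n_attempts):
        """Probability the defense survives n independent attempts."""
        return (1 - p_per_attempt) ** n_attempts

    print(survival_probability(0.001, 1))          # ~0.999: first-time attack, secrecy helps
    print(survival_probability(0.001, 1_000_000))  # ~0.0:   low uniqueness, obscurity fails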
> The paper goes into some sidetrack about people trying to find the
> hidden gems in video games - an activity that may or may not have
> something to do with computer security, but is clearly irrelevant
> ("fluffy") in this context.
My students love the video game part when they read it -- it
helps them see the similar patterns of cyberattacks, physical attacks,
and video game "attacks."
> Also, the paper doesn't do a good job of proving that the Efficient
> Capital Markets Hypothesis is relevant to the discussion. It's clearly
> true that attackers will gain a lot from disclosure, but the Open Source
> model doesn't care, because they only have one way to fix their software
> - disclose bugs. The paper even goes so far as to say the ECMH probably
> doesn't apply. But if it doesn't apply, why mention it? (page 30 implies
> that the paper was simply suggesting it as an area for further
> "research", but that would make a better footnote than paper section).
> Adding to the fuzziness feel is the way the paper reaches for an analogy
> in another social science, and fails.
Some discussion on FD and elsewhere assumes that vulnerabilities
will be found very, very efficiently -- if a flaw is there, it's only a
short wait before someone finds it. In talking
with people who write software, however, I was repeatedly struck by
their observation that it takes considerable hard work and expertise to
find new vulnerabilities. The ECMH discussion gives reasons for
thinking that vulnerability discovery will, in some settings, be less
instantaneous than many seem to have assumed.
Peter
Paper at http://papers.ssrn.com/sol3/papers.cfm?abstract_id=531782