Message-ID: <20040902210537.GA9197@bozorky.foofus.net>
From: foofus at foofus.net (Mr. Rufus Faloofus)
Subject: Security & Obscurity: First-time attacks and lawyer jokes

On Thu, Sep 02, 2004 at 12:24:29PM -0400, Peter Swire wrote:
[snip]
> A theme of the paper: when attacks are closer to first-time
> attacks (when they have high uniqueness), then secrecy can be an
> effective tool. When there is low uniqueness, security through
> obscurity is BS. And many, many cyberattacks fall into the second
> category.

Hrm. This seems to me like circular reasoning, but it might take
me a little effort to explain; I hope you'll bear with me. One
commonplace strategy for prioritizing risks is to estimate the
rate of an undesirable event's occurrence (say, an Annualized
Rate of Occurrence, or ARO) and multiply that number by an
estimate of such an event's severity (say, a Single Loss
Expectancy, or SLE).[1] Thus, relative metrics can be derived:

    Risk = ARO * SLE

It seems to me that security by obscurity is an attempt to reduce
risk by hiding things as a means of minimizing ARO. This is,
however, a "brittle" defense[2], in the sense that as soon as the
hidden attack surface is identified, attacks tend to become readily
reproducible. Put another way, if obscurity is your main security
tactic, once the obscurity is lifted, that defensive measure is
wholly negated.

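The brittleness can be made concrete with the Risk = ARO * SLE
metric. Here is a minimal sketch (all figures are invented for
illustration) of how the computed risk behaves when obscurity is
the only control and is then lifted:

```python
# Illustrative sketch: risk as Annualized Rate of Occurrence times
# Single Loss Expectancy (Risk = ARO * SLE). All numbers invented.

def risk(aro: float, sle: float) -> float:
    """Expected annual loss: (events per year) * (loss per event)."""
    return aro * sle

SLE = 50_000.0  # hypothetical loss per successful attack

# While the weakness is obscure, a successful attack is nearly a
# unique event; once the vector is published and scripted, the
# obscurity contributes nothing and the rate jumps.
aro_obscure = 0.01    # roughly one expected attack per century
aro_disclosed = 20.0  # routine automated attacks per year

print(risk(aro_obscure, SLE))    # 500.0
print(risk(aro_disclosed, SLE))  # 1000000.0
```

The point of the sketch is the discontinuity: nothing about the
system changed except the secrecy, yet the risk figure jumped by
a factor of two thousand, entirely outside the defender's control.
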
So, in the case of a system defended by obscurity, to say that
an attack has "low uniqueness" seems to me to be putting the
cart before the horse. If an attack has "low uniqueness" then
has not the obscurity already been circumvented? Likewise, if
the proper vector for attack is not known (i.e., the veil of
security still lies thick and luxurious over the system's
exploitable condition), is it really fair to say that secrecy
can be "an effective tool"?

If we take a strongly literal interpretation of "effective,"
perhaps this makes sense: there have not yet been any known
attacks, so the effective security is perfect. The catch is
that once the first successful attack takes place, the attacker
may publish his or her discovery about our system's weakness,
and attacks can instantly jump from high uniqueness to very
low uniqueness.

One important objective of security is to aid us in managing
risk over time, not just on an instantaneous basis. We want
systems that fail gracefully in the event of trouble: all-or-
nothing security strategies (i.e., those that allow risk to
escalate rapidly and in a manner outside of our control) are
undesirable. In the service of this end, then, secrecy isn't
really an "effective" tool: should our weakness no longer be
secret, obscurity loses its efficacy. In fact, it has
evaporated more or less entirely.

The statement "when there is low uniqueness, security through
obscurity is BS" is misleading, in the sense that the condition
of low uniqueness implies that obscurity has *already* been
defeated. Likewise, to say that "many cyberattacks fall into
the second category [i.e., low uniqueness]" also misses the
mark, in the sense that automated/scripted attacks *depend* on
knowledge of how to succeed.

This is the classic story of the 0-day exploit. In a sense,
each and every system is defended by the fact that the means
of compromising it are unknown. Once they become known, it's
a good idea to address the vulnerability that's being exploited,
rather than just finding a way to make it secret again, or else
we'll be in the very same position later, when the flaw is
rediscovered.

Full disclosure is premised on the belief that we can actually reduce
our risk overall by trading away some of our "high uniqueness."
The absence of successful attacks might feel good, but I'd
sacrifice some of that feeling in exchange for better assurance
that my defenses won't crumble entirely the minute a discovery
is made.

But I digress. The point I'm trying to make is that the theme
you're drawing out in the paragraph above is pleonastic in
nature: a condition of "high uniqueness" doesn't so much suggest
that obscurity can be an effective tool as restate that the
obscurity is still intact, since "high uniqueness" means
precisely that viable vectors of attack are not widely known.
Likewise, "low uniqueness" doesn't mean we should abandon
obscurity: by the time we have "low uniqueness," obscurity is
long gone, because the condition itself implies that vectors of
attack are well known.

--Foofus.

[1] Some people may suggest refinements to this scheme, or
variations on the equation, or different units of measurement.
I have no desire to quibble about which scheme is the absolute
best. This one is nice and simple, though.

[2] I use this term in the sense that Schneier does (BEYOND FEAR);
I cite this, because the term might get used in other senses
elsewhere.