Message-ID: <6.2.0.14.2.20060216040826.05ae4080@ranum.com>
Date: Thu, 16 Feb 2006 06:05:54 -0500
From: "Marcus J. Ranum" <mjr@...um.com>
To: "Craig Wright" <cwright@...syd.com.au>,
<self-destruction@...best.com>, <bugtraq@...urityfocus.com>
Subject: RE: Vulnerabilities in new laws on computer hacking
self-destruction@...best.com apparently writes:
>"Advanced societies" are updating computer crime laws faster than the
>rest of the world. This means that new generations of these more
>"advanced societies" will have no clue about how remote computer attacks
>are carried out. Future generations of security "experts" will be among
>the most ignorant in the history of computer security.
This is an interesting assertion: Legislation causes ignorance. I'll get
to that in a moment but let me comment on the "advanced societies"
issue and offer a different perspective.
Advanced societies (as you call them) are the technological and
economic fast-movers; the ones that have invested heavily across
their economies in high tech, which means IT. One way of looking at
the tightening of legislation in those societies is that it is a reaction
to the disproportionate pain that technologically advanced societies
suffer as a consequence of cyber-crime. As a society becomes
increasingly dependent on computing, the cost of protecting those
computers becomes a large item in the "expense" column. So
perhaps you might look at it from this perspective:
The reaction of the technologically advanced societies to cyber-crime
is a harbinger of how EVERY society will react to cyber-crime as they
move up the economic chain. The reason the rest of the world is not
reacting as the advanced societies are is because they can still
afford not to.
Put differently - if you think it's bad (from your perspective) now, you
ain't seen nothin' yet!
>New generations of teenagers will be scared of doing online exploration.
This is a ridiculous assertion - if you were correct that legislation
significantly stifles criminal activity, then the US would be winning
the "War On Drugs" right? After all, drugs are broadly illegal in the
US and - well - are teenagers scared of exploring dope? Hmmm...
Maybe not.
Let's look at another aspect of that: I know of no society that is
placing serious restrictions on an individual's ability to explore
his/her own systems. I worded that carefully because DRM
and the "Super DMCA" have gotten a lot of press over the implication
that a user might not be able to "explore" data that they bought
and paid for (i.e.: an application or a DVD) but actually the
debate there is whether the user actually owns the data or
merely owns the right to access the data, and that's a contract
law question more than anything else. However, I know of no
advanced society that says I can't buy a bunch of computers and
copies of VMWare or honeyd or whatever and build myself a
lab LAN/WAN and hammer it however I please, destructively
"test" the software on it, etc. Perhaps I might not be motivated
to go to the trouble of setting up my own test LAN, or I might
not have the financial or intellectual resources to do so - but
those are not a result of fear. There is nothing to stop groups
of teenagers (to stick with your example) from playing capture-the-flag
LAN parties with their own machines - as long as everyone
is consenting, they're welcome to have at it until
the electricity and caffeine supply runs out!!
So - unlike the situation with drugs - there's STILL a perfectly
legitimate avenue for socially sanctioned exploration. And we
have seen that the "War On Drugs" hasn't exactly had a very
great cooling effect on dope use - so I simply can't accept
your assertion that teenagers are somehow going to be
terrified to explore computers.
... but is that what you're talking about, really? I don't think so.
>I'm not talking about damaging other companies' computer systems. I'm
>talking about accessing them illegally *without* revealing private
>information to the public or harming any data that has been accessed.
*Aha*
So you're talking about "exploring" someone's computer
without their permission.
In virtually every society (not just the advanced ones) that has a
notion of property, there is a notion of property rights. The very
notion of property rights argues that _I_ have the right to control
how _MY_ property is used. In fact, property rights make up
the core of the social contract and the rule of law - i.e.: trespassing
is a very, very old crime. "Exploring" someone's computer without
their permission is a violation of their property rights, pure and
simple. In any society under the rule of law, under virtually any
moral and legal system I have encountered, there are protections
that govern intrusion into another's property.
So, I believe you're being intellectually dishonest calling such
actions "exploring" - "trespassing" might be a better word, as
a starting point for further dialog.
It sounds like you're adopting the position that I've often
heard voiced by trespassers, namely, "I didn't do any
harm" or "I was passing through" etc - which is not a
tenable position in the face of ANY attempt by the property
owner to give notice that intrusion is unwelcome. For
example, in most US States, ignoring a "No Trespassing"
sign is a criminal act. I would argue that there's an exact
mapping between the circumstance of my posting a
"No Trespassing" sign on my property and my installing
a firewall on my Internet connection. It gets more complex
when you consider that in some jurisdictions it is not
even necessary to post a "No Trespassing" sign to
assert your property rights. Indeed, I am not required
to post a "No Stealing" sign on my car when I park
it, NOR am I required to lock it in order to assert the
full protection of the law. This maps exactly to the
situation in which I have an internet connection with
no firewall at all; the fact that I do not have a "No Exploring"
sign is NOT an implicit invitation to explore. Never
mind complex moral philosophies - common courtesy
requires that one ask permission before going where
one is not invited.
> To
>me, there is a big difference between these two types of attacks but I
>don't think that judges feel the same way. Furthermore, I don't even
>think that judges understand the difference.
In most of the "advanced societies" you are referring to, it is not
a judge who makes this determination - it is a jury.
Whether a jury understands or does not understand, or
respects or does not respect, a distinction YOU choose to
make is irrelevant. You are playing self-serving semantic
games. By asserting that you feel there is a difference between
one type of trespassing (the kind you deem harmful) and
another type of trespassing (the kind you deem harmless)
you are creating a distinction of convenience only to yourself.
Does the law recognize such a distinction? I submit to you
that unless the laws recognize "harmless trespass"
versus "harmful trespass" then you are on shaky ground.
Put differently, if you trespass on my property, I do not
have to care about your intent. I merely have to care that
you violated my property rights. Your "I meant no harm"
argument is part of your plea for clemency once you've been
convicted and it's time to pass sentence.
>Now, I'm not saying that I support accessing computer systems illegally.
Actually, you do appear to be saying that. Or, more precisely, you
appear to be saying that by enforcing our existing property rights
over our computers, we (the computer owners of the world) are
going to somehow increase the level of ignorance about computer
security.
That's a ridiculous position!
>All I'm saying is that by implementing very strict laws on "hacking", we
>will create a generation of ignorant security professionals. I think to
>myself, how the hell will these "more advanced societies" protect
>themselves against cyber attacks in the future?
Those more "advanced societies" will protect themselves
quite well, for a number of reasons. First off, because they
have more at stake, they will be obligated to (waste) invest
more time preparing to stave off trespassers. As your stake
increases, your motivation to preserve it increases accordingly.
This is part of the tyranny of the cyber-criminal: the cost they
force the innocent to incur is disproportionate. An E-banking
company may spend hundreds of thousands of dollars to
build a defensible network whereas my grandmother might
begrudge $19.95 for an antivirus package. Disproportionate
spending will, unfortunately, result in a disproportionate demand
for defensive expertise.
What does that mean? That means you'll be dealing with
guys like me. :) Folks from an engineering/system design/
architecture background, who treat this "cyber-crime" as
a serious problem that can be managed effectively using
engineering and design disciplines. Security is nothing
more than extending a failure analysis forward into a
predictive failure analysis model (formally or ad hoc) and
checking your implementations against past experiences
of failure. This is exactly the same design discipline that
civil engineers use when they build bridges: they
understand that a bridge will be exposed to wind, rain,
corrosives dropped on the road surface, vibration, metal
fatigue, etc. These failure paradigms are extrapolated from
past experience and taxonomized as aspects of a discipline.
You will not find a bridge designer who has not heard of
the Tacoma Narrows Bridge or metal fatigue or Rust-Oleum
(or stainless steel, for that matter!). Software is only on
the verge of becoming an engineering discipline, but
eventually, I hope, you will not find a designer of mission
critical software who does not know what a buffer overflow
is, or who does not understand component testing.
So, as much as you may not like it, there are plenty of
folks out there who understand that software security is a
design and architecture issue - not a process of slapping
band-aids on bad code until it's, well, bad code covered
with band-aids. What you'll find is that engineers who
understand engineering discipline find bug-hunting to be
an utterly boring process; well-designed and implemented
systems don't need "pen testers" - they cross-check
themselves. The only reason the industry is in the
horrible condition it's in today is because the vast
majority of code that's been fielded to date is crap. That
will have to change. And when it does, "pen testers"
will become peons in the quality assurance department.
>These new tougher computer laws will, in my opinion, have a tremendous
>negative impact in the defense of these "advanced societies". It almost
>feels to me like we're destroying ourselves.
I think you're kidding yourself.
>I know what you're thinking. You can learn about security attacks by
>setting up you're own controlled environment and attacking it yourself.
>Well, what I say is that this approach *does* certainly make you a
>better attacker, but nothing can be compared to attacking systems in
>real world scenarios.
Who cares if someone is a good attacker?
Let me try that differently. What is a "good attacker"? (By "good"
I assume you mean "skilled.") A skilled attacker is someone
who has internalized a body of analysis of past failures, and
can forward-project those failures (using imagination) and
hypothesize instances of those failures in the future. Put
concretely - a skilled attacker understands that there are
buffer overruns, and has a good grasp of where they usually
occur, and laboriously examines software to see if the usual
bugs are in the usual places. This is a process that, if the
code was developed under a design discipline, would be
replaced trivially with a process of code-review and unit
testing (a little design modularization wouldn't hurt, either!).
But it's not actually rocket science or even interesting.
What's so skilled about sitting with some commercial
app and single-stepping until you get to a place where
it does network I/O, then reviewing the surrounding code
to see if there's a memory size error? (Hi, David!) Maybe
YOU think that's security wizardry but, to me, that's
the most boring grunt-work on earth. It's only interesting
because right now there's a shockingly huge amount of
bad code being sold, which gives the "hit space bar all
night, find a bug, and pimp a vulnerability" crowd an
enormous target space to play with.
>Now, I personally know many pentesters and I can say that most of them
>*do* cross the line sometimes when doing online exploration in their own
>free time. However, these guys would *never* harm anything or leak any
>sensitive information to the public. That's because they love what they
>do, and have very strong ethical values when it comes to privacy.
Your understanding of ethics appears to be shakier than
your understanding of software engineering.
You're trying to excuse the trespasser that "never harms anything"
from having done wrong, but you cannot do that because you never
asked the victim's opinion. Indeed, the very fact that the victim
may have already gone to expense to try to prevent the trespass
merely means that the trespasser has added insult to injury! The
trespasser is still morally culpable.
Suppose a property owner has a 250-acre property they want to
keep private. After all, it's theirs, they have the right to keep
it private, and they want to enjoy it without strangers wandering
about on their land. So our property owner spends $400 on 500 "No
Trespassing" signs and nails, and two days nailing them to
trees around the perimeter of the property. Now, a stranger
comes along, ignores the signs, becomes a trespasser, and leaves.
Has the property owner been wronged? Absolutely. Whether the
trespasser "never harmed anything" or not, they ignored the
property owner's moral rights, and additionally the property owner
has now spent 2 days nailing and $400 on signs - and it was
wasted. Obviously, you can't assign the entire cost of the signs
and the wasted time to a single trespasser, but it's certainly
insult to injury. The trespasser has no moral right to claim that
their assessment of "not harming anything" supersedes the
property owner's -- after all, by placing "No Trespassing" signs,
the property owner has explicitly informed the trespasser that
trespass in and of itself is harmful. This is why trespassing is a
crime, and aggravated trespass is a felony (aggravated
trespass would be if the trespasser decided to tear down a
few of the signs, just to show that stupid land-owner that he
"knows better" and "means no harm").
Obviously, you can map these values to IP networks - the
fact that a system has ANY form of security enabled AT ALL
is analogous to a "No Trespassing" sign. Though I question
the moral underpinnings of an Internet society in which the
prospective victim has to put a "NO STEALING" sign on
their car and a "NO RAPING" sign on their backside and
a "NO SPYING" sign in their window, and a "NO WIRETAPPING"
sign on their phone, etc.
In other words, in the real world, property rights are an
ingrained concept in virtually all societies. The movement of
"advanced societies" to tighten up cybercrime laws is
simply a reflection of those advanced societies rationally
extending the moral values of property rights into cyber-space.
The view you espouse, in which you arrogate to yourself
the right to decide what constitutes harmless trespass
versus harmful trespass -- that's a view that probably will
not last very long, IF IT EVER EXISTED AT ALL. Let me be
frank with you, since you seem to want to be an apologist
for the cyber-trespasser: the fact that $6 billion annually is
spent on firewalls, IDS, antispyware, antivirus, vulnerability
management, etc -- is a VERY LOUD STATEMENT that
society as a whole DOES NOT APPROVE OF CYBER
TRESPASSING. On the internet, virtually every tree has a
"No trespassing" sign nailed to it. You choose to pretend
not to see it at your own risk.
>I would say that most pentesters are "grey hats", rather than "white
>hats".
I agree with you. I would say that most pentesters are
failed security analysts who do not understand engineering
discipline and have chosen to engage in the war of band-aids
instead of learning how to build correct systems. And then
there are the pentesters who really are cybertrespassers
at heart, who have found a financial and moral justification
for doing something for money that they'd otherwise do
anyhow, for free, in the wee hours of the night.
Put differently: either way you slice it, pentesters aren't
worth a bucket of warm spit as far as I am concerned.
>In fact, I believe that the terms white and black hat are completely artificial because we all have different sides. The human
>mind is not binary, like black or white, it's something fuzzy instead,
>with many layers. The terms white and black hat were, in my opinion,
>created by business people to point out who the "good guys" and "bad
>buys" are.
I believe that you are seeking to create moral ambiguity because
if you don't have that ethical grey area to work in, you've lost
your playground.
You're right, though - black/white hat is probably poor
terminology. As a property owner (both in the real world
and in cyberspace) there are only two kinds of people on
my land:
- Invited Guests
- Trespassers
There is no room there for moral ambiguity.
>If I was the technical director of a computer security testing company I
>would try to find pentesters that are not malicious, but that do cross
>the line sometimes but at the same time, know when it's a good time to
>stop exploring.
I am glad you are not the technical director of a computer
security testing company then. In fact, I hope you are not
employed in the field of computer security at all, if you
would be trying to recruit, as professionals, people who
"cross the line." In fact I am extremely glad that you're
also not the director of a day-care facility, and that you
don't want to hire employees that "occasionally grope
the children" (but not TOO much!) or that you aren't the
director of a bank, who'd want to hire tellers that
"only occasionally pocket (small denomination) bills."
>If you hire someone that has never broken into a system, this guy will
>not be able to produce valuable reports for customers because he will
>not be able to find vulnerabilities that can't be found running a
>scanner.
If you're trying to understand the security properties of a
system by breaking into it, you're not producing valuable
reports anyhow. All you are doing is telling them where
to put the next band-aid.
>In summary, I'd like governments of the world to rethink their strategy
>when fighting computer crime. Extremism never worked and never will.
In summary: the views you expressed typify, to me, the negative
effect of accepting a moral grey area into our profession. You speak
of ethics and, in the next breath, you show that you don't even know
what ethics ARE. You speak of learning, and, in the next breath, you
show that you don't understand how to apply learning in a disciplined
and predictable manner.
>Remember, many of today's script kiddies will be the infosec
>professionals of tomorrow.
Ironically, I am the person who first coined the expression
"script kiddie" (back in 1994 I think it was...) - but I originally
used the term not to apply to the ankle-biter cybercriminals;
I was using "script kiddy" to describe the first-generation
security auditors! Back in the early 90's, when the "big 6"
first got into the security audit game, they used to send these
ignorami right out of college, with checklists, who'd go around
customer sites looking to see if the /etc/passwd file on
Windows machines had the correct permissions - and they'd
write a report saying that the "passwd file is missing!"
In the sense that I originally coined the expression "script
kiddy," I was referring to those of you who now proudly call
yourselves "pentesters."
Ironic, huh?
mjr.