Message-ID: <01b501c2a05e$c7e2e160$c71121c2@sharpuk.co.uk>
From: DaveHowe at gmx.co.uk (David Howe)
Subject: Security Industry Under Scrutiny: Part 3
at Tuesday, December 10, 2002 5:12 AM, sockz loves you
<sockz@...il.com> was seen to say:
>> If anyone has learnt anything in security over the years, it's that
>> "security through obscurity"
>> DOES NOT WORK
> plz explain why? so far all the explanations i've heard have been
> self-sealing arguments backed up only by simplified models and
> varying degrees of 'faith' in the security industry.
Because if your algo is secure, you don't NEED to rely on it being
secret for it to work. This doesn't mean you have to show everyone
and plaster it all over a website - but you shouldn't care if someone
can see the code, and in fact, if you can get another security
professional (or as many as possible) to look at your code, they may
find security faults you missed; as long as you don't give them your
key (and they can't break it), your system *should* remain secure.
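This is Kerckhoffs's principle in miniature, and a quick sketch can make it
concrete (illustrative code, not from the original post): with a completely
public algorithm such as HMAC-SHA256, everything an attacker could want to
read - the code, the message, the tag - is visible, yet forging a valid tag
still requires the one thing kept secret, the key.

```python
import hashlib
import hmac
import secrets

# The algorithm (HMAC-SHA256) is fully public; only the key is secret.
key = secrets.token_bytes(32)          # the sole secret in the system
message = b"transfer 100 to alice"

# Anyone may read this code and see the tag; reproducing it needs the key.
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify(k, msg, t):
    """Constant-time check that the tag matches the message under key k."""
    return hmac.compare_digest(hmac.new(k, msg, hashlib.sha256).digest(), t)

assert verify(key, message, tag)                     # legitimate holder: ok
assert not verify(b"\x00" * 32, message, tag)        # wrong key: forgery fails
```

The point is exactly the one made above: publishing the algorithm costs
nothing, and exposing it to review is how its faults get found.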
Relying on your actual code being secret breaks the second someone
else gets a chance to look at it - and protecting an entire site from
read access forever is not feasible. If all else fails, there is
always "thick wallet" cryptanalysis - there is always *someone* onsite
who can be bribed to plant a bug or swap a "failed" backup tape for a
fresh one before it hits the eraser/furnace/whatever.
In the early days of Unix, even the *passwords* file on a server was
world-readable - it was understood that this didn't matter, as it wasn't
possible to obtain the passwords from the file (later on, computing power
reached the point where it *was* possible to recover the passwords by
brute force, but then, it is almost always possible to break some
passwords on a box by brute force; I particularly like the reverse
password approach banking-site crackers use :)
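The attack that eventually made the readable passwd file dangerous is an
offline dictionary attack, which is simple enough to sketch (hypothetical
entries; the real file used crypt(3) hashes, here plain SHA-256 is used for
brevity): hash each candidate password and compare against the stolen digests.

```python
import hashlib

# Hypothetical world-readable file of user -> password-hash entries.
entries = {"root": hashlib.sha256(b"letmein").hexdigest()}

# A small candidate wordlist; real attackers use millions of guesses.
wordlist = [b"password", b"123456", b"letmein", b"qwerty"]

def crack(entries, wordlist):
    """Offline dictionary attack: hash each guess and compare digests."""
    found = {}
    for user, digest in entries.items():
        for guess in wordlist:
            if hashlib.sha256(guess).hexdigest() == digest:
                found[user] = guess.decode()
                break
    return found

print(crack(entries, wordlist))  # -> {'root': 'letmein'}
```

Nothing here is secret except the passwords themselves - which is exactly
why the scheme held up only until hashing every plausible guess became cheap.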
> what others should i have included that were relative to the debate?
> i assumed that if i was describing the flow of information between
> whitehats and script kiddies, then i would not need to list any other
> adversaries because they would have been outside the scope of the
> email. perhaps i was wrong? then again you could mean here that
> fake-whitehats with fake-advisories are also kinds of adversaries? i
> am not clear on this.
I think what he is suggesting is that, regardless of how you throttle the
flow of information from whitehats to everyone else (including the
blackhats and kiddiez), there are other channels of information available
to those whose hats are not spotless; the people who would lose the most
information if whitehat disclosures were removed entirely would be the
site admins and other whitehats.
>> The solution you present for secure computing is indeed a purely
>> political scheme, and not a technological scheme, for the goal is
>> not the reduction of vulnerabilities, but _the reduction of
>> REPORTED "security violations"_.
> that's correct and incorrect.
> the goal is to change the way vulnerabilities are reported. it isn't
> security through obscurity really, because a responsible security
> architect would be notifying the software vendor alone... and not the
> rest of the world. what i am calling for here is not an end to bug
> reports but a beginning of maturity and responsibility in the
> industry.
Unfortunately, this is the way it used to be.
Full disclosure isn't so much a tool to get vulnerability information
into the hands of the deserving (and undeserving, of course) as a tool to
achieve change; back before FD was common, literally hundreds of people
would report the same vulnerability to the vendor, and each be told they
were the only one, that it must be an isolated case, and that the next
release would fix it (of course it wouldn't, but that silenced the
complaints for a while). Now a *responsible* whitehat will alert the
vendor, but also let the vendor know exactly how long he is willing to
wait for a response before he dumps the information (with proof of
concept; before FD, no vendor would even admit a bug existed unless you
gave them proof of concept) to every channel in reach. Name-and-shame
is a PR disaster, and it then makes financial sense for the vendor to do
something about it (and try their best to spin it positively; "look, we
had a bug and we have patched it before any harm could be done") rather
than face plummeting sales due to lack of consumer confidence. That
some vendors (a fair few in the early days, and occasionally even now)
prefer to try and use their potential loss as a legal club to silence
the discoverer (while doing nothing to avert the eventual damage to
customers they aren't liable for) shows only that vendors can't be
trusted to do the right thing unless they are shamed into it.
It is sad that some whitehats are not responsible; in particular, ISS,
in rushing out a vulnerability alert (with full exploit code and a remote
shell) for IIS without telling MS about it at all (in order to promote
their own vulnerability scanner), probably did more damage to the case
for "responsible disclosure" than all the script kiddiez combined.
>> BLACKHATS ALREADY KNOW AND HAVE THIS INFORMATION!
>> BLACKHATS DO NOT DISCLOSE!
> i think that it is unreasonable to suggest that everything that has
> been churned out on bugtraq in the last year was discovered by a
> blackhat.
No. Let us assume there are four groups:
1. whitehats working to find vulnerabilities
2. vendors working to find vulnerabilities
3. blackhats working to find vulnerabilities
4. site maintainers looking for suspicious-looking entries that might
indicate a known or unknown vulnerability
The first group maintain full disclosure; they (if they are responsible)
notify the second group so that a fix can be released at the same time
as the disclosure (but note Bruce's curve for actual vulnerability over
time).
The second group are interested purely in PR spin - they will patch as
unobtrusively as possible, often sitting on a patch for months until it
can be rolled into an unrelated issue (such as a protocol enhancement or
improved driver) and shipped without disturbing the peaceful, contented
sleep of their customers.
The third group work just as diligently as the first; they notify
only each other (so they have a slight head start over the first group,
but only slight, as they trade knowledge rather than give it freely, so
the trades are begrudged and never contain the most valuable
information). They also benefit from the first group's information, but
recognise that anything widely publicised is not valuable (in a trading
sense) to them, and will be watched for in other code (so it may even
devalue information they previously considered valuable).
The last group will use the information from the first group to prime
them on what sorts of exploits to look for; this is one of the few
groups where actually finding nothing is a victory (vendor teams may
find themselves downsized if they find nothing :) provided there is
actually nothing there to find. Finding nothing while there *is*
something to find, though, may well get them downsized, so they should
be as diligent as possible in keeping up with the "state of the art".
They may also drop potential vulnerability information into the same
channels that the whitehats use, so that smarter minds can pick it over
and possibly find something they missed.
(in case you are wondering, I consider myself in group 4; this should
let you know at least what my vested interests are :)
Now let us assume we remove group 1 entirely; if they don't have any
channels, they become equivalent to group 2, just not paid to do it -
and their numbers will drop drastically.
Group 1 will no longer exist.
Group 2 will find they have less help; not only do they have less
information coming from whitehats directly to them, but *comparative*
information - information on other vendors' problems that might also be
in their own products - is lost as well.
Group 3 will find they have less competition; not only does every
exploit they find have a longer lifetime (due to lack of reporting),
but they can apply the same principles cross-vendor and exploit several
different systems at once; odds are good that any patches that *are*
issued are issued blind (there is no fanfare, and they are rolled into
"functional" patches), and therefore there is no incentive for sysadmins
to install them; after all, a working system may develop problems if you
apply a patch; if it ain't broke.....
Group 4 will be more or less isolated; there will be no information they
can use to educate themselves on what "suspicious" activity looks like;
vendors will actively downplay any problems they find; some random
future patch *may* repair the vulnerability, or simply address their
individual case without addressing a more general problem; their
managers will rely on vendor glossies to evaluate security strength (ok,
they do that anyhow :) and won't know a bad decision from a good one;
compromised systems may *remain* compromised for years, and when the
compromise is discovered, the organisation will downsize the
"sacrificial" technical staff member, then pay some vendor to install a
completely different vulnerable package.
> whitehats find bugs to make themselves famous, make money,
> score advisory brownie points, and those bugs can be *anything*. i
> dunno about you but the only bugs i've really sought after are the
> ones that will help me achieve my individual goals.
It's how the game is played - whitehat reporting (like open source
development, to a great extent) is a reputation game; those who are
publicly good at it gain respect (which may or may not spill over into
improved work opportunities, but is in any case worth having in itself)
at the expense of vendors, who suffer reputation loss (and hence profit
loss) only salvageable by expensive security patching they would not
otherwise consider economic to perform.