Date: Thu Feb 23 22:17:31 2006
From: mattmurphy at kc.rr.com (Matthew Murphy)
Subject: Re: How hackers cause damage... was Vulnerabilities in new laws on computer hacking

And people say _I_ think in black and white...?

Jason Coombs wrote:
> We must build computer systems that separate the act of installing and
> executing software from the act of depositing data on read/write media.
> 
> Executable code must not be stored on read/write media. At least not the
> same media to which data is written, and access to write data to
> software storage must not be possible through the execution of software;
> at least not software executing on the same CPU as already-installed
> software.
> 
> Our CPUs need a mechanism to verify that the machine code instructions
> being executed have been previously authorized for execution by the CPU,
> i.e. the machine code is part of software that has been purposefully
> installed to a protected software storage separate (logically, at least,
> and both physically and logically separated at best) through actions
> that could not have been simulated or duplicated by the execution of
> machine code at runtime on the system's primary CPU.
> 
> The worst-case scenario of 'repair' and 'recovery' from any intrusion
> event should be verification of the integrity of protected storage,
> restore from backup of data storage, analysis of data processing and
> network traffic logs to ascertain the mode of intrusion (if possible)
> and reboot of the affected box with a staged reintroduction of the
> services that box previously provided (if you just re-launch all of the
> services being exposed by the box then it is just as vulnerable as
> before to whatever attack resulted in the intrusion, so you start from
> the most-locked-down condition and add services one at a time,
> monitoring for a period of time at each step).
[snip remainder of delusional plan for world peace and secure computing]

There are several fundamental problems with this statement.  First of
all, not every compromise involves subverting the system itself.  Not
all exploits achieve, or require, the introduction of malicious,
unauthorized code.  For instance, take a web server that offers up
confidential information when faced with some type of path traversal
sequence (IIS extended unicode, anyone?).  The code that the web server
is executing is 100% authorized, because it all lies within an installed
component.  The problem is that the authorized code is flawed in such a
way that it permits access to *data* that it should not: the Social
Security numbers stored in your flat-file database, your site's password
file, etc.

That attack just succeeded... potentially compromising millions of
credit card numbers, thousands of user accounts or [insert apocalypse]
_without ever relying on the ability to introduce foreign code_.
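
To put a face on it, here's a contrived sketch of exactly that kind of
hole (hypothetical handler, not lifted from IIS or any real server).
Every instruction it executes is "authorized"; the bug is in which data
it hands back:

/* Illustrative path traversal flaw -- hypothetical code. */
#include <stdio.h>

#define WEBROOT "/var/www/html"

/* Naive handler: glues the client-supplied path onto the web root
 * without canonicalizing it first. */
void serve_file(const char *request_path)
{
    char full_path[1024];
    snprintf(full_path, sizeof(full_path), "%s/%s", WEBROOT, request_path);

    FILE *f = fopen(full_path, "r");
    if (!f) {
        printf("404 Not Found\n");
        return;
    }

    char buf[256];
    while (fgets(buf, sizeof(buf), f))
        fputs(buf, stdout);
    fclose(f);
}

int main(void)
{
    /* No code is injected; the request merely abuses the trusted
     * code's willingness to follow ".." out of the web root. */
    serve_file("../../../../etc/passwd");
    return 0;
}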

Further, this "dual-channel" system (for lack of a better term) that you
propose would mean fundamentally redesigning computing, to an extent
that is not readily apparent.  Fine, store all your executable code on a
second drive and bar all writes to the executable volume from within the
protected system.  The basic methods of doing this already exist: file
system ACLs at a high level and read-only/tamper-resistant media at a
low level.  Granted, it would require a sizable amount of
reconfiguration to implement with today's PC operating systems, but it
is possible.
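
Just to make "it is possible" concrete, here is a rough sketch of what a
stock Linux box can already be told to do (device names and mount points
are made up for illustration): the code volume is mounted read-only, the
data volume is writable but nothing on it may execute.

# /etc/fstab -- hypothetical layout, illustrative only
/dev/sda1   /       ext3   ro                 0 1
/dev/sdb1   /data   ext3   rw,noexec,nosuid   0 2

Everything executable lives on a volume the running system cannot
modify; everything the running system can modify, it cannot execute.
It's a pale shadow of the hardware scheme you describe, but it's the
same idea, available today.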

That problem is solved -- preventing the alteration of the execution
environment from a base install.  However, the mythical notion espoused
by those who consider their code "secure" is that the base install
itself is safe.  In the majority of cases, this is nowhere near true.
So, there's still the possibility of buffer overrun attacks and other
"code injection" scenarios that rely on the ability to alter some
portion of the system's primary memory.
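
The textbook shape of that problem, as a contrived sketch (made-up
function, not real product code): trusted code copies hostile data into
a fixed-size stack buffer, and the data overwrites the very control
state the trusted code depends on.

/* Contrived stack-smashing sketch -- illustrative only. */
#include <string.h>

void parse_request(const char *input)
{
    char name[64];
    /* No length check: input longer than the buffer runs past it and
     * tramples the saved registers and return address.  The attacker
     * never installs anything; he just supplies data. */
    strcpy(name, input);
}

int main(int argc, char **argv)
{
    if (argc > 1)
        parse_request(argv[1]);
    return 0;
}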

You can make a big stride against that by isolating the execution
environment from unsanitized run-time information (data) in memory,
either by page-level flagging (NX/XD) or by more intrusive physical
isolation (multiple buses, separate RAM regions for code and data,
etc.).
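
A small sketch of the "flagging" half of that, assuming an NX/XD-capable
CPU and an OS that honors it (POSIX mmap used purely for illustration):
memory that holds untrusted bytes is mapped writable but not executable,
so jumping into it faults instead of running the payload.

/* Page-level code/data separation via NX -- illustrative sketch. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    /* Writable, NOT executable: where untrusted data belongs. */
    unsigned char *data = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                               MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (data == MAP_FAILED)
        return 1;

    memset(data, 0xCC, 16);   /* pretend this came off the network */

    /* On NX-capable hardware, transferring control into this page is
     * killed with a fault rather than executed:
     *     ((void (*)(void))data)();    <- would die with SIGSEGV
     */
    printf("payload page is rw, not rx\n");
    munmap(data, 4096);
    return 0;
}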

However, this does nothing to stop the execution of TRUSTED code in an
unsafe environment.  The classic example is what are often mis-labeled
"return-into-libc" attacks.  A vulnerability
in software is exploited such that normally safe (and ostensibly
trusted) code is executed in an unsafe context.  As far as I can tell,
the isolation of data from "trusted" code would not solve this.  For
that to be remedied, you'd fundamentally have to limit the TCB to code
that could be trusted to execute in *ANY* environment.  The problem of
security in the first place is that such code is non-existent in today's
world and not achievable in the foreseeable future, due to inherent flaws
and imperfections in the logic of the human species designing the crap.
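
In miniature, and as a purely hypothetical sketch (no exploit payload
constructed here), it looks like this: the same unchecked copy as
before, but instead of injecting new instructions the attacker points
the saved return address at code that is already installed and fully
"trusted" -- system() in libc -- and hands it an argument of his
choosing.  Nothing runs except authorized code, and NX never fires.

/* Return-into-libc shape -- illustrative only, no payload built. */
#include <stdlib.h>
#include <string.h>

void read_record(const char *untrusted)
{
    char buf[64];
    strcpy(buf, untrusted);   /* the same unchecked copy as before */
    /* A crafted overlong input would overwrite the saved return
     * address with the address of system() -- already installed,
     * already "authorized" -- and point its argument at a string of
     * the attacker's choosing.  Every instruction the CPU fetches
     * afterwards comes out of libc. */
}

int main(void)
{
    read_record("short, harmless input");

    /* Called legitimately here only to show the target: trusted,
     * resident code that will act on whatever string it is handed. */
    system("echo trusted libc code, attacker-chosen argument");
    return 0;
}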

So, this "solution" that you propose fails to completely solve one of
the most well-known security threats of our time: buffer overruns.  In
order to solve that one class of vulnerabilities, you must not only
isolate the actual code from data but must also prevent the state of the
"trusted" code base from being damaged by errors intrinsic to the
trusted code.  In essence, you have to prevent the trusted code base
from opening itself to attack.  When you identify the solution for that,
by all means, let us know!

A related avenue for malware also exists: software automation.  From
macro languages in word processing packages (a joke) to interpreted
scripting languages (a serious one), it's everywhere.  Take, for
instance, recent vulnerabilities in PHP applications (all 500,000 of
them).  Many of these allow malicious users to introduce unsafe *script*
into the environment of the susceptible system.  However, your
hardware-based enforcement fails in this scenario because PHP is a true
interpreted language.  *NO ADDITIONAL MACHINE CODE* is ever assembled
into memory when a PHP file is processed.  There is a primitive p-code
used by the interpreter; however, all of the code that the CPU _actually
executes_ in this case lives in the original "trusted" PHP binaries
(interpreter, SAPI, extensions, etc.).

So, by your standard, PHP's very existence in the TCB would be an
irreparable vulnerability, because your defensive posture depends
entirely on code not doing bad things when it's faced with hostile data.
After all, the p-code used by the interpreter when it processes that PHP
worm is just data, and it triggers many of the same code paths used when
communicating with other entities (a database server, for instance).
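
The point generalizes beyond PHP.  A toy interpreter in C (entirely made
up, but the same shape as any p-code dispatch loop) makes it visible:
the CPU only ever executes the trusted dispatch code, yet hostile *data*
decides which of those trusted paths run, in what order, and against
what.

/* Toy interpreter sketch: hostile data drives only-ever-trusted code. */
#include <stdio.h>

/* A trivial "script" language: D = dump the secret, L = log a line.
 * Every opcode handler below is installed, authorized, trusted code. */
static void op_dump_secret(void) { puts("s3cr3t-data-goes-here"); }
static void op_log(void)         { puts("logged"); }

void interpret(const char *script)
{
    for (const char *p = script; *p; p++) {
        switch (*p) {
        case 'D': op_dump_secret(); break;   /* trusted code path */
        case 'L': op_log();         break;   /* trusted code path */
        default:  break;                     /* ignore junk */
        }
    }
}

int main(void)
{
    /* No machine code is ever assembled from this string; it is pure
     * data -- and it still decides exactly what the program does. */
    const char *attacker_supplied = "LLDDD";
    interpret(attacker_supplied);
    return 0;
}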

Your "trusted" software base also seems fundamentally incompatible with
JIT approaches like Java and .NET which attempt (with a debatable degree
of effectiveness) to augment the hardware by providing additional
safeguards, with the inherent recognition of the fact that the code
they're executing simply *CANNOT BE FULLY TRUSTED*.

Also, you fail to address the root of the malware problem in today's
world, which is the question of what defines "trusted code".  How does
one authorize the inclusion of code into a TCB, and, further, how does
the TCB protect itself from the occasional user error of allowing
untrusted code to be introduced?  Perhaps two of the biggest problems in
today's world are a
lack of information about the trustworthiness of code and a lack of
ability on the part of system users to interpret that information when
it is made available.  Even if we establish a much better framework for
limiting code to trusted code, the issue of determining trust is still
there in force.

And if you think you're going to do away with e-mail attachments, don't
get me started...

Trust is not an absolute term... unless you *want* to be a victim.
There's a reason why this fallacious approach to software security
hasn't been adopted yet.  Here's a hint, Jason: it's not because
software developers enjoy seeing their users owned by the latest buffer
overrun exploit.  It's because the concept just doesn't work.

As for your "recovery" idea -- inherent in it is the destruction of the
untrusted portion of the system (the data) in order to return the
trusted portion (the code) to its default state.  That is, today, the
greatest cost of PC recovery: lost data.  Sure, reinstalling software
is a pain, but it's not the ongoing cost that data loss is.

With your proposal as well, software "reinstallation" is simply shifted
from being a restorative process (replacing the system's contents) to an
investigative one.  You assume that it is the software -- not the
configuration of it or the user of it -- that introduced the
vulnerability.  In most cases that assumption is false, and it creates
a level of uncertainty at least equal to that created by the "flatten
and rebuild" approach that today's professionals practice.

Simply restoring software gradually has a near-zero chance of detecting
the original attack.  What idiot is going to be stupid enough to hit me
*a second time* via the same vulnerability *immediately after* I
re-enable a vulnerable piece of software... and thereby give me
a significant insight into the "keys of the kingdom"?  If you answered
something like "a preschooler", congratulations.  You're dead on.

You might be right about one thing: the fact that software
vulnerabilities come from bad business decisions.  If, that is, you
believe in a creator and you believe that his/her/its decision to
produce human beings with slight intellectual flaws was a business
decision.  In that case, I'd agree with you, but I'd be burned at the
stake as a heretic.

Sure, more of the vulnerabilities could be eliminated by better QA, but
today's world is one of minimizing impact as well as reducing
vulnerability itself.  Vulnerabilities originate from the same cause as
traffic accidents, train derailments, and on and on.  That cause?  The
fact that people make major mistakes.

It's time to pull off the covers and wake up.  You're on earth now...
and here... we humans haven't figured out how to write perfect code.  If
we had, would somebody be paying you or me to figure out what was wrong
with what they'd written?  I don't think so.

--
"Social Darwinism: Try to make something idiot-proof,
nature will provide you with a better idiot."

                                -- Michael Holstein
