Message-ID: <20040121013313.SELN337742.fep03-mail.bloor.is.net.cable.rogers.com@BillDell>
From: full-disclosure at royds.net (Bill Royds)
Subject: Old school applications on the Internet (was Anti-MS drivel)

 What you describe is actually one of the reasons for some of the flaws in
MS software. It was built with the assumption that the only machines on the
network that it would communicate with were other MS boxes. The network was
a LAN only (which is why it was called LAN Manager). The flaw that allowed
MSBlaster/Nachi last summer was one of these things. The software was
written assuming that an RPC caller process would "play nice" and never
send an invalid NetBIOS name for any machine. MS-written software never
did. So the called server never checked the name size, since there was no
need to and the caller always checked :-). When a worm ignored these
agreements, we got MSBlaster. Malware does not "play nice". It does not
even play badly accidentally. It deliberately tries to do damage. So the
stakes for writing software are much, much higher than they were when
DOS/Windows was originally written. Windows 95 was written with the
assumption that it would only be used in a LAN. Bill G's belated discovery
of the Internet (and the bolt-on TCP/IP stack for Windows 95) has led to
much of our security nightmare. Windows 9x was never designed for an open
network, and the requirement that Windows NT/2000/XP/2003 be compatible
with the older versions has prevented these from truly being
Internet-aware at the core.
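
A minimal C sketch of that class of bug (illustrative only, not the actual
RPCSS code; the function names are mine): a fixed-size buffer sized for a
well-formed NetBIOS name is filled from caller-supplied data, first
trusting the caller, then with the length check the server should have
done itself.

    #include <stdio.h>
    #include <string.h>

    #define NETBIOS_NAME_LEN 16

    /* Vulnerable: trusts the caller to send at most 16 bytes. */
    void handle_request_trusting(const char *caller_name)
    {
        char name[NETBIOS_NAME_LEN + 1];
        strcpy(name, caller_name);         /* overflows on a long name */
        printf("request from %s\n", name);
    }

    /* Hardened: validates the untrusted length before copying. */
    int handle_request_checked(const char *caller_name)
    {
        char name[NETBIOS_NAME_LEN + 1];
        if (strlen(caller_name) > NETBIOS_NAME_LEN)
            return -1;                     /* reject invalid input */
        strcpy(name, caller_name);         /* now known to fit */
        printf("request from %s\n", name);
        return 0;
    }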
 Most old school software and its QA was attempting to ensure that the
software produced correct results from valid input. But making a program
work is the easy part. The hard part is making it NOT work, in a secure
manner: when faced with invalid input, it should not process it as if it
were valid input. That is what a true security researcher does. He/she
finds what input a program accepts when it shouldn't, and determines the
consequences of that input. True QA and testing compartmentalizes all
possible input so that one can be assured that invalid input will be
safely rejected or at least sanitized. One can never assume that the
arguments to any routine are valid. If they ever came from "outside",
they need to be treated as tainted.
   In an old batch mainframe environment, rejecting bad input often just
means correcting the data and trying again. In an online continuous
transaction processing environment (which Internet servers are), one
often can't just reject the bad input. One has to unravel all the good
input that preceded it and that depends on the bad input, to ensure that
the internal state of your processor is still valid (that the database is
not corrupted, for instance). This means that QA is immensely harder than
it was when these systems were written.
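
One way to get that "unraveling" is to run the whole batch inside a single
database transaction, so a bad record late in the feed rolls back every
earlier record it arrived with. A sketch using SQLite as a stand-in (the
schema and helper names are mine):

    #include <sqlite3.h>
    #include <stdio.h>

    /* Hypothetical validation and insert helpers for the sketch. */
    static int record_is_valid(const char *rec)
    {
        return rec != NULL && rec[0] != '\0';
    }

    static void apply_record(sqlite3 *db, const char *rec)
    {
        sqlite3_stmt *st;
        sqlite3_prepare_v2(db, "INSERT INTO log(line) VALUES(?1)",
                           -1, &st, NULL);
        sqlite3_bind_text(st, 1, rec, -1, SQLITE_STATIC);
        sqlite3_step(st);
        sqlite3_finalize(st);
    }

    /* All-or-nothing: one bad record undoes the whole batch. */
    int process_batch(sqlite3 *db, const char **records, int n)
    {
        sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
        for (int i = 0; i < n; i++) {
            if (!record_is_valid(records[i])) {
                sqlite3_exec(db, "ROLLBACK", NULL, NULL, NULL);
                return -1;              /* database left as it was */
            }
            apply_record(db, records[i]);
        }
        sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);
        return 0;
    }

    int main(void)
    {
        sqlite3 *db;
        sqlite3_open(":memory:", &db);
        sqlite3_exec(db, "CREATE TABLE log(line TEXT)", NULL, NULL, NULL);
        const char *batch[] = { "ok-1", "ok-2", "" }; /* third is bad */
        if (process_batch(db, batch, 3) != 0)
            puts("bad record: whole batch rolled back");
        sqlite3_close(db);
        return 0;
    }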
   But people are attaching these old systems to the modern Internet
without taking these differences into account. A system that kept account
information unencrypted, since it was only going to travel over a closed
LAN, is not going to cut it once it is connected to a web server that
faces the open Internet. There is no such thing as a LAN anymore. Once
you allow your users to connect to the Internet, all the old assumptions
are invalid. You really should start over again and redesign your
applications for the new requirements.
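
For the unencrypted-account-data example, the minimal redesign step is to
stop sending it in the clear at all. A hedged sketch of a client wrapping
its connection in TLS with certificate verification (OpenSSL 1.1+ BIO
API; the hostname is hypothetical):

    #include <openssl/ssl.h>
    #include <stdio.h>

    int main(void)
    {
        SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
        SSL_CTX_set_verify(ctx, SSL_VERIFY_PEER, NULL); /* verify peer */
        SSL_CTX_set_default_verify_paths(ctx);  /* system trust store */

        BIO *bio = BIO_new_ssl_connect(ctx);
        BIO_set_conn_hostname(bio, "accounts.example.com:443");

        SSL *ssl = NULL;
        BIO_get_ssl(bio, &ssl);
        SSL_set_tlsext_host_name(ssl, "accounts.example.com"); /* SNI */

        if (BIO_do_connect(bio) <= 0
            || SSL_get_verify_result(ssl) != X509_V_OK) {
            fprintf(stderr, "TLS connect/verify failed\n");
            return 1;
        }
        BIO_puts(bio, "account data now travels encrypted\n");
        BIO_free_all(bio);
        SSL_CTX_free(ctx);
        return 0;
    }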

-----Original Message-----
From: full-disclosure-admin@...ts.netsys.com
[mailto:full-disclosure-admin@...ts.netsys.com] On Behalf Of Michal Zalewski
Sent: January 20, 2004 4:03 PM
To: yossarian
Cc: [Full Disclosure]
Subject: Re: [Full-Disclosure] Anti-MS drivel

<snip>

These applications usually undergo much more rigorous QA, and this
eliminates most of the basic reliability issues that occur in reasonably
"normal" working conditions - but the most common type of QA does almost
nothing to find problems that will surface only when the application is
poked with a stick by a sufficiently skilled attacker. Old school
development and quality assurance practices, and developers with mindsets
locked on network security as it used to be in the late '80s or so, are
far more prevalent in these environments. And it really, really shows.
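
The crudest form of that stick-poking is a fuzzer: where typical QA feeds
curated valid cases, it feeds random bytes and watches for crashes. A toy
sketch (the target routine, parse_field, is hypothetical):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Hypothetical input-handling routine under test. */
    static int parse_field(const char *buf, size_t len)
    {
        char field[32];
        if (len >= sizeof(field))
            return -1;          /* the check a fuzzer hopes is missing */
        memcpy(field, buf, len);
        field[len] = '\0';
        return (int)strlen(field);
    }

    int main(void)
    {
        srand((unsigned)time(NULL));
        for (int iter = 0; iter < 100000; iter++) {
            char buf[256];
            size_t len = (size_t)(rand() % (int)sizeof(buf));
            for (size_t i = 0; i < len; i++)
                buf[i] = (char)(rand() % 256);  /* arbitrary bytes */
            parse_field(buf, len);              /* a crash == a bug */
        }
        puts("survived 100000 random inputs");
        return 0;
    }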

The relatively low number of vulnerabilities found in those products can
be attributed to a couple of basic factors:

1) Average Joe Hacker does not have access to prohibitively expensive
   or highly specialized systems used in high-profile corporations.
   He does have his Windows and Linux partition, though, maybe even
   a Solaris box somewhere, and can sometimes get ahold of Oracle.
   Enterprise applications for VMS or OS/400, hardly so. This holds true
   both for amateur researchers and for many "vulnerability research"
   shops - they simply do not have the budget (or incentive) to do it.

2) Joseph Hacker who happens to be working in a corporation that has such
   a platform is usually limited in how far he can experiment with it
   while playing it safe, especially if it has been a production system
   "ever since", and creating a dedicated testbed with appropriate data
   feeds would be overly complex or time-consuming.

3) Even if Joseph finds a flaw, he is expected to work with the vendor
   to protect his company's assets, instead of disclosing a problem
   (otherwise, a swift retaliation from both the vendor and his
   now ex-employer would ensue). He does not have the freedom
   Joe enjoys.

   Moreover, sometimes vendors are extremely non-cooperative, and there
   is simply no alternative to this platform that could be adopted as a
   replacement without major transition expenses and problems.

4) The public interest in this type of vulnerability is marginal.
   Although some solutions may be popular in corporations, the systems
   usually do not face the Internet, and are seldom mentioned in the
   media. As such, there is very little incentive to disclose this
   type of stuff, as only a couple of folks are going to realize
   what you are talking about to start with.

Just my $.02.

-- 
------------------------- bash$ :(){ :|:&};: --
 Michal Zalewski * [http://lcamtuf.coredump.cx]
    Did you know that clones never use mirrors?
--------------------------- 2004-01-20 21:31 --

   http://lcamtuf.coredump.cx/photo/current/

_______________________________________________
Full-Disclosure - We believe in it.
Charter: http://lists.netsys.com/full-disclosure-charter.html

