Date: Tue, 9 Nov 2004 12:05:43 +0200
From: Heikki Kortti <hkortti@...enomicon.com>
To: bugtraq@...urityfocus.com
Subject: Re: Update: Web browsers - a mini-farce (MSIE gives in)

It's been interesting to follow the discussion after Michal's original
submission. Readers familiar with the PROTOS style of testing and the
results of that research will know how effective protocol-based
invalid-input testing has proven to be at catching at least a majority
of common implementation flaws. The PROTOS approach calls this
"robustness testing", as per the following IEEE definition:

"robustness. The degree to which a system or component can function
correctly in the presence of invalid inputs or stressful environmental
conditions." (IEEE Standard Glossary of Software Engineering
Terminology)

Of course, the idea of automated testing with invalid inputs harks
back all the way to fuzz testing. Whereas fuzz testing typically
employs completely (or very nearly) random inputs, protocol-based
robustness testing focuses on modifying otherwise valid protocol data
in ways that the last 20-30 years have shown to be most likely to
trigger implementation flaws. The common bugs found by this type of
testing would inevitably be found later by various auditors and guys
with interesting-coloured hats, so this simply helps in catching some
of these vulns early on.
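As a concrete (and heavily simplified) illustration of the difference,
here is a sketch in Python; the baseline request and the anomaly
values are illustrative only, not taken from PROTOS or any particular
tool:

    import random

    # A valid baseline message; robustness testing mutates this
    # rather than generating purely random bytes.
    BASE_REQUEST = "GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n"

    # A small library of values known to trigger common implementation
    # flaws (buffer overflows, integer handling, format strings, ...).
    ANOMALIES = [
        "A" * 65536,        # overlong field
        "%s%n%x" * 32,      # format string specifiers
        "\x00" * 16,        # embedded NUL bytes
        "-1",               # signedness / boundary issues
        str(2 ** 32),       # 32-bit integer wraparound
    ]

    def random_fuzz_case(length=64):
        """Classic fuzz testing: (nearly) random bytes."""
        return bytes(random.randrange(256) for _ in range(length))

    def robustness_case():
        """Protocol-based robustness testing: a valid request with
        one field replaced by a known-troublesome value."""
        method, path, rest = BASE_REQUEST.split(" ", 2)
        anomaly = random.choice(ANOMALIES)
        if random.random() < 0.5:
            return anomaly + " " + path + " " + rest
        return method + " " + anomaly + " " + rest

The random case mostly exercises the outermost parsing layer, while
the mutated-but-otherwise-valid case carries the anomaly deep into
the implementation before anything rejects it.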

Needless to say, the idea can be applied not only to network
protocols but to almost any kind of structured data, including file
formats. In addition to HTML, image and audio/video formats lend
themselves well to this kind of testing. File format parsers are
still not usually written with malicious input in mind, which means
that in most cases they are trivial to break. (As compared with
network protocol implementations, that is, although the boundary
between the two has steadily blurred over the last ten years for
obvious reasons.)
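As a toy example of the same idea applied to a file format, one might
mutate the header fields of a minimal valid GIF; the specific
mutation below is just one of many an automated tool would try:

    import struct

    # A minimal valid 1x1 GIF89a, used as the baseline for mutation.
    GIF = (
        b"GIF89a"
        + struct.pack("<HH", 1, 1)     # logical screen width/height
        + b"\x80\x00\x00"              # GCT flag, bg colour, aspect
        + b"\x00\x00\x00\xff\xff\xff"  # 2-entry global colour table
        + b"\x2c"                      # image descriptor follows
        + struct.pack("<HHHH", 0, 0, 1, 1) + b"\x00"
        + b"\x02\x02\x44\x01\x00"      # LZW-compressed pixel data
        + b"\x3b"                      # trailer
    )

    def mutate_dimensions(data, width, height):
        """Rewrite the logical screen size while leaving the actual
        pixel data untouched -- a classic parser stressor."""
        out = bytearray(data)
        out[6:10] = struct.pack("<HH", width, height)
        return bytes(out)

    # Claim a 65535x65535 image that only carries one pixel of data:
    hostile = mutate_dimensions(GIF, 0xFFFF, 0xFFFF)

A parser that multiplies the claimed dimensions to size a buffer, or
trusts them when walking the pixel data, fails immediately on input
like this.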

Since the invalid input space is quite large even for simple
protocols and data formats, automated test generation is a must, as
is careful human design of the initial parameters for the invalid
data used. Experience with past software vulnerabilities and
knowledge of common programming errors certainly help here.
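As a sketch of what such generation might look like (the field names,
anomaly values and template below are all hypothetical, not from any
real tool), consider a single-fault strategy:

    # Hand-picked anomaly values per field type; this is where the
    # experience with past vulnerabilities comes in.
    STRING_ANOMALIES = ["", "A" * 10000, "%n" * 64, "\xff" * 256]
    INTEGER_ANOMALIES = [0, -1, 2 ** 31 - 1, 2 ** 31, 2 ** 32 - 1]

    # A toy message template: field name -> (type, valid value).
    TEMPLATE = {
        "username": ("string", "alice"),
        "timeout":  ("integer", 30),
        "filename": ("string", "report.txt"),
    }

    def generate_cases(template):
        """Yield one message per (field, anomaly) pair: each field in
        turn receives every anomaly while the others stay valid. Even
        this single-fault strategy produces fields x anomalies cases,
        which is why the generation has to be automated."""
        for field, (ftype, _) in template.items():
            pool = (STRING_ANOMALIES if ftype == "string"
                    else INTEGER_ANOMALIES)
            for bad in pool:
                case = {name: valid
                        for name, (_, valid) in template.items()}
                case[field] = bad
                yield case

    print(sum(1 for _ in generate_cases(TEMPLATE)))  # 13 cases here

Real test suites layer many more anomaly categories and multi-fault
combinations on top of this, which is exactly where the human design
effort goes.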

Disclaimer: I work for Codenomicon, which develops and sells test
tools for this kind of testing. Codenomicon is a spin-off from the
PROTOS research group OUSPG, and we sponsor them by supplying them
with our test generation framework.

However, I am not trying to sell anything, simply lauding Michal's
excellent foray into a topic I myself am passionate about. Automated
robustness testing is hardly a panacea, but it does help in catching
a lot of the trivial input-handling errors. This is especially
valuable when the testing can be done before the software is even
released. The Mozilla developers seemed to "get it"; as I understand
it, they very rapidly integrated these tests into their regular
(regression) testing cycle.

-- 
Heikki Kortti

