Date: Mon, 22 May 2006 17:39:36 -0700
From: Crispin Cowan <crispin@...ell.com>
To: Fabian Becker <neonomicus@....de>
Cc: bugtraq@...urityfocus.com
Subject: Re: How secure is software X?


Fabian Becker wrote:
> in my opinion a software can either be secure or not secure. 
> I think it's a bit like a woman cannot be "a bit pregnant".
>   
The problem with this view is that it ignores both time and differential
knowledge: who knows what, and when do they know it?

While it is true that a given block of bits is either vulnerable (has
one or more exploitable defects) or secure (has zero exploitable
defects), this is only relevant in the case of perfect omniscience: you
know absolutely everything about that instance of the software.

But knowing everything is improbable. Software is complex, and there
likely isn't enough time to explore all possible angles of attack. A
trivial counter-example is the printf format string attack: the class
was unknown prior to 2000, when it was publicly disclosed, and then
zillions of fresh vulnerabilities appeared.
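
To make the class concrete, here is a minimal sketch of the defect
pattern (an illustration, not code from any particular advisory):

    #include <stdio.h>

    int main(int argc, char **argv)
    {
        if (argc < 2)
            return 1;

        /* Vulnerable: the attacker controls the format string. An
         * argument like "%x.%x.%x" leaks words off the stack, and
         * "%n" turns the leak into a memory write. */
        printf(argv[1]);
        putchar('\n');

        /* Safe: the format string is a constant. */
        printf("%s\n", argv[1]);
        return 0;
    }

The defect is trivial to write and sat in plain sight in many programs
for years, which is exactly the point: the bits did not change in 2000,
only the attackers' knowledge did.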

So a discussion of relative vulnerability is certainly relevant to the
practical world. Relative vulnerability is the question "what is the
*work factor* of finding a vulnerability in this piece of software?" A
program that shows vulnerabilities 10 seconds into a fuzz scan is
extremely vulnerable. A program that shows no vulnerabilities after
months or years of scrutiny (e.g., qmail and Postfix) is highly secure,
even though it probably still contains *some* vulnerability.
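
To illustrate what "10 seconds into a fuzz scan" means, a crude harness
along these lines is enough to measure the trivial end of the
work-factor scale ("./target" is a stand-in name, and real fuzzers are
far smarter about generating inputs):

    #include <signal.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>

    int main(void)
    {
        signal(SIGPIPE, SIG_IGN);  /* survive writes to a dead target */
        srand(1);
        for (int i = 0; ; i++) {
            char buf[256];
            for (size_t j = 0; j < sizeof buf; j++)
                buf[j] = rand() & 0xff;    /* random garbage input */

            FILE *p = popen("./target 2>/dev/null", "w");
            if (!p)
                return 1;
            fwrite(buf, 1, sizeof buf, p);
            int status = pclose(p);

            /* A crash shows up either as a signal or as the shell's
             * 128+signal exit convention; either way, a hit this
             * early means the work factor is essentially zero. */
            if (WIFSIGNALED(status) ||
                (WIFEXITED(status) && WEXITSTATUS(status) > 128)) {
                fprintf(stderr, "crash at iteration %d\n", i);
                return 0;
            }
        }
    }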

My Sardonix project has been mentioned in this thread. Sardonix
attempted to measure the security of programs based on the recorded
skills of the people who had audited them, and conversely to rate the
auditors based on the programs they had audited and the quality of
their audits. Sardonix failed due to lack of participation, likely
because it asked far too much of the auditors.

What is needed for a more successful project is a lighter-weight way to
record who has audited a program. The standard that Litchfield proposed
could become that: similar to CDDB, it would simply log who has audited
the program, and users could make whatever they want of that record.
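
For illustration only (the field names are invented here, not taken
from Litchfield's actual proposal), such a record could be as small as:

    /* Hypothetical audit-registry entry; the real standard would
     * define its own fields and encoding. */
    struct audit_record {
        char program[64];      /* package name, e.g. "postfix"        */
        char version[32];      /* release that was audited            */
        char src_sha1[41];     /* hash identifying the exact source   */
        char auditor[64];      /* who performed the audit             */
        char date[11];         /* YYYY-MM-DD                          */
        char report_url[128];  /* where the audit notes are published */
    };

Like a CDDB lookup, the registry would resolve a hash of the code to
the list of such records, and nothing more; interpreting them is left
entirely to the user.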

Crispin
-- 
Crispin Cowan, Ph.D.                      http://crispincowan.com/~crispin/
Director of Software Engineering, Novell  http://novell.com


