Date: Thu, 23 Dec 2004 14:06:27 -0500
From: "Palmer, Paul (ISSAtlanta)" <PPalmer@....net>
To: "Jonathan Rockway" <jrockw2@....edu>, <bugtraq@...urityfocus.com>
Subject: RE: DJB's students release 44 *nix software vulnerability advisories


Jonathan,

You touch on a couple of topics which are worthy of exploring further.

First, let's talk about the nature of risk. It is not black and white;
it is a continuum, very difficult to measure exactly but usually
possible to estimate. In the security profession you quickly learn that
not only can you not eliminate risk, but in many cases it doesn't even
make sense to try. Your job is to manage and reduce risk as
cost-effectively as possible.

I agree that full disclosure serves a purpose. However, I disagree that
_immediate_ disclosure is the best solution because it imposes too much
risk on the end-users (the victims). In your example, you had the option
of disabling SSH until a patch was available. Others probably could not
pursue that course of action as the opportunity cost would have been too
high. That is, there are many enterprises in this world that absolutely
depend on SSH to function efficiently. Many probably didn't disable SSH
even when they knew there was an unpatched vulnerability. Were they
lazy? Were they just stupid? I doubt it. They most likely performed a
risk assessment, even if only unconsciously. That is, they weighed the
potential cost to their organizations of leaving SSH active until a
patch was available against the lost revenue and/or increased cost of
doing business they would incur if they pulled SSH in the meantime. You
did the same thing. However, in your case the risk of leaving SSH active
was probably much higher than the cost of not having it.
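
(As an aside, that implicit calculation is easy to make explicit. Below
is a rough Python sketch of it; every number is invented purely for
illustration, and the point is the shape of the comparison, not the
figures.)

    # Back-of-the-envelope risk assessment; all numbers are made up.
    p_compromise = 0.02             # est. chance of being exploited before a patch ships
    cost_if_compromised = 250_000   # est. cleanup and downtime cost

    expected_loss_staying_up = p_compromise * cost_if_compromised   # 5,000

    lost_revenue_per_day = 20_000   # est. cost of doing business without SSH
    days_until_patch = 3
    cost_of_pulling_ssh = lost_revenue_per_day * days_until_patch   # 60,000

    # Leaving SSH running is the rational choice whenever the expected
    # loss of staying up is smaller than the opportunity cost of pulling it.
    keep_ssh_running = expected_loss_staying_up < cost_of_pulling_ssh   # True here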

What immediate disclosure does is immediately increase the total risk
incurred by the users of the vulnerable software until a patch becomes
available. Contrast this with what is currently the generally accepted
practice: notify the vendor; give them a reasonable amount of time to
fix the immediate problem; give them a reasonable amount of time to
audit for any similar problems; give them a reasonable amount of time to
verify that they haven't introduced any other problems in the course of
fixing the current one; post your advisory after either a patch is
available or a reasonable time has expired. This practice greatly
reduces the risk to the end users (remember they ARE the victims here).
The vendor is still "punished", maybe not as harshly as you would
prefer, but these announcements do still take their toll. The key here
is that no unnecessary collateral damage is imposed upon the victims in
the interest of making sure that the "vendors get what is coming to
them"...

Now let's talk about the LAZY humans and software vendors. They aren't
any lazier than you are. As a group, however, they are arguably quite
efficient with their limited resources. Perfectly secure software
has a development cost. In fact, it is a VERY high development cost.
This cost would have to be passed on to the customers and now we get
back into our discussion of risk and opportunity cost. If a vendor
provided a commercial version of SSH that they had made flawless, for
$10,000-100,000 a copy, would you buy it? (Why so much? The cost of
development increases and must be passed on to the customer base. As the
cost goes up, the potential customer base shrinks, and therefore the
cost burden per remaining customer must also increase. A very non-linear
relationship; the toy calculation after this paragraph makes it
concrete. Granted, this greatly simplifies a very complex subject in its
own right, but it is accurate enough for this discussion.) I doubt
that you would. It wouldn't be very cost-effective of you. Your
personal risk in dealing with the occasional SSH bug has an amortized
cost far less than $10,000 (the minimum opportunity cost of being more
secure in this scenario). You still use the publicly available SSH even
after its history of bugs, don't you? Even when clearly presented with
evidence that the software has a history of flaws (and likely has other
as yet undiscovered ones), you still do not choose the far more secure
alternative of removing your system from the Internet. Does that make
you lazy, stupid, or smart/efficient/practical?
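
(Here is the toy calculation, again as a Python sketch; the development
cost and customer counts are invented purely for illustration.)

    # Fixed development cost spread over a shrinking customer base;
    # the figures are hypothetical.
    dev_cost = 50_000_000   # assumed cost of a near-flawless SSH

    for customers in (1_000_000, 10_000, 100):
        per_copy = dev_cost / customers
        print(customers, "customers ->", per_copy, "per copy")

    # 1000000 customers -> 50.0 per copy
    # 10000 customers -> 5000.0 per copy
    # 100 customers -> 500000.0 per copy
    # The per-copy burden grows as dev_cost / N: shrinking the customer
    # base by 100x raises the price 100x, which shrinks the base further.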

Why don't the commercial vendors of SSH produce a guaranteed flawless
product? Is it because they are full of lazy programmers or greedy
executives? I do not think so. I would argue that the marketplace
doesn't currently support it. Customers do want more secure products;
they just are not willing to pay enough extra to cover the development
costs.
 
Paul

-----Original Message-----
From: Jonathan Rockway [mailto:jrockw2@....edu] 
Sent: Wednesday, December 22, 2004 1:06 AM
To: bugtraq@...urityfocus.com
Subject: Re: DJB's students release 44 *nix software vulnerability
advisories


On 21 Dec 2004, at 3:22 PM, laffer1 wrote:

> As for the other comments in this thread about telling the vendor
> early, I personally feel it helps users if the vendor has a few days 
> to look at the hole and devise a patch BEFORE everyone on the planet 
> knows about it.  You punish users of software in addition to vendors.

> All software has a security problem of one kind or another, and it's 
> silly to think that a perfect application will ever be written.

Why are users using insecure software?  Or rather, why do users accept 
the fact that their software may be insecure?

Besides, full disclosure helps the users too.

I remember a few years ago when the major SSH remote hole was found.  I 
read about it on slashdot between classes.  Since there was no patch 
yet, but there was an exploit, I ssh'd into my home machine and turned 
off ssh.  Even if there hadn't been an exploit, I wasn't going to leave a 
vulnerable service up and running.  By the time I got home, sshd was 
patched, and I installed that patch.  If nobody had disclosed the 
threat to me, I could have been compromised.  But, because I was 
notified, I was able to take preventative measures.

The sooner someone told me about the problem, the sooner I was able to 
protect myself from the threat.

Full disclosure is important because vendors will drag their feet if 
they're the only ones who know about it.  Imagine you are a student and 
have a paper due "whenever".  When are you going to write the paper?  
Today?  No, you'll do it later.  After all, nothing bad happens if you 
don't do it, and not doing it is much easier than doing it.

Humans and software vendors are LAZY.  If there's no reason to do 
something, they won't do it.  Full disclosure forces the issue and puts 
everything out in the open.  No "it'll be ready in 90 days" stalling.  
It will be ready NOW or users will look to more secure alternatives.   
They will make the first move and choose a program that doesn't have 
security problems to begin with.  This is better for everyone (well, 
except people with a financial interest in selling crappy software; that 
doesn't particularly upset me, though).

I do have more sympathy for open source developers.  They are not 
trying to profit from the security of their software, so I think they 
deserve a little leeway.  BUT, fixes are usually contributed by outside 
experts.  The experts can't just guess "Oh, I bet Person A of the NASM 
project needs help with security problems.  I'll send him an email and 
ask if he needs help."  They need to know about the vulnerability 
before they can attempt to fix it.   If they're reading a full 
disclosure mailing list, they'll know about the problem. Then they can 
code up a fix, email it to the author, and bang, it's fixed.   It all 
starts with full disclosure, though.   (Without full disclosure, the 
author of the software would be on his own to fix it.  With full 
disclosure, someone more experienced can help him out.  That's a Good 
Thing and is what makes the Open Source movement work.)

Full disclosure makes the Internet more secure.  It forces vendors to 
fix their broken software, and it forces users to update their broken 
software.  Less broken-ness is good for everybody.

If you disagree, you are probably writing broken software and are 
afraid of what your users will do to you when they find out about it.  
Good luck with that, and remember: don't shoot the messenger.  If you 
wrote the buggy code, you have only yourself to blame.

Regards,
-- 
Jonathan Rockway <jrockw2@....edu>


