Message-ID: <96B32186-53DF-11D9-88EF-000D932F57A2@uic.edu>
Date: Wed, 22 Dec 2004 00:06:15 -0600
From: Jonathan Rockway <jrockw2@....edu>
To: bugtraq@...urityfocus.com
Subject: Re: DJB's students release 44 *nix software vulnerability advisories

On 21 Dec 2004, at 3:22 PM, laffer1 wrote:
> As for the other comments in this thread about telling the vendor
> early, I personally feel it helps users if the vendor has a few days
> to look at the hole and devise a patch BEFORE everyone on the planet
> knows about it. You punish users of software in addition to vendors.
> All software has a security problem of one kind or another, and it's
> silly to think that a perfect application will ever be written.

Why are users using insecure software? Or rather, why do users accept
the fact that their software may be insecure?

Besides, full disclosure helps the users too.

I remember a few years ago when the major SSH remote hole was found. I
read about it on slashdot between classes. Since there was no patch
yet, but there was an exploit, I ssh'd into my home machine and turned
off ssh. Even if there wasn't an exploit, I wasn't going to leave a
vulnerable service up and running. By the time I got home, sshd was
patched, and I installed that patch. If nobody had disclosed the
threat to me, I could have been compromised. But, because I was
notified, I was able to take preventative measures.
The sooner someone told me about the problem, the sooner I was able to
protect myself from the threat.

Full disclosure is important because vendors will drag their feet if
they're the only ones who know about a vulnerability. Imagine you are a
student and
have a paper due "whenever". When are you going to write the paper?
Today? No, you'll do it later. After all, nothing bad happens if you
don't do it, and not doing it is much easier than doing it.

Humans and software vendors are LAZY. If there's no reason to do
something, they won't do it. Full disclosure forces the issue and puts
everything out in the open. No "it'll be ready in 90 days" stalling.
It will be ready NOW or users will look to more secure alternatives.
They will make the first move and choose a program that doesn't have
security problems to begin with. This is better for everyone (well,
except people with a financial interest in selling crappy software. That
doesn't particularly upset me, though.)

I do have more sympathy for open source developers. They are not
trying to profit from the security of their software, so I think they
deserve a little leeway. BUT, fixes are usually contributed by outside
experts. The experts can't just guess "Oh, I bet Person A of the NASM
project needs help with security problems. I'll send him an email and
ask if he needs help." They need to know about the vulnerability
before they can attempt to fix it. If they're reading a full
disclosure mailing list, they'll know about the problem. Then they can
code up a fix, email it to the author, and bang, it's fixed. It all
starts with full disclosure, though. (Without full disclosure, the
author of the software would be on his own to fix it. With full
disclosure, someone more experienced can help him out. That's a Good
Thing and is what makes the Open Source movement work.)

Full disclosure makes the Internet more secure. It forces vendors to
fix their broken software, and it forces users to update their broken
software. Less broken-ness is good for everybody.

If you disagree, you are probably writing broken software and are
afraid of what your users will do to you when they find out about it.
Good luck with that, and remember: don't shoot the messenger. If you
wrote the buggy code, you have only yourself to blame.

Regards,
--
Jonathan Rockway <jrockw2@....edu>