Message-ID: <200507151835.j6FIZqxu007856@linus.mitre.org>
Date: Fri Jul 15 19:36:01 2005
From: coley at mitre.org (Steven M. Christey)
Subject: Why Vulnerability Databases can't do everything

Regarding a particular vulnerability database, Xavier Beaudouin
<kiwi@....net> said:
>They push advisories without testing and without respecting the usual
>way of informing the developer, as they should.
(name omitted simply because it could have been about any vuln
database.)
No doubt a lot of what I'm about to say was covered by Brian Martin at
CanSecWest this year, however...
Vulnerability databases and notification services have to pore through
approximately 100 new public vulnerability reports a week.
Correction: that's HUNDREDS of reports, from diverse and often
unproven sources, for about 100 unique vulnerabilities per week.
A LARGE number of vendors and maintainers either:
(1) are unresponsive to email inquiries (about half my emails go
unanswered, and about 20% of those who do answer don't answer
my questions)
(2) make you register or require you to be a customer to access
their product "support"
(3) don't have good contact information in the first place
(4) don't want to tell you anything about whether they've fixed a
publicly reported vuln or not, for fear of giving out too much
information.
(5) sometimes require hand-holding if they don't understand the vuln
report
In addition, vulnerability databases and notification services have
to:
(1) navigate through large numbers of poorly written researcher
advisories that are riddled with mistakes and that often were not
coordinated with the vendor in the first place (possibly due to
the researcher's own trouble contacting the vendor).
(2) somehow refine and present this stuff in a usable format for the
consumer
(3) where possible, obtain better information either by researching
the issue themselves or contacting the vendor
Most advisories, whether they come from researchers, vendors, or third
parties, suffer from one or more of the "Four I's" problems:
- Incomplete
- Inaccurate
- Inconsistent
- Incomprehensible
And think about what would be required for testing every claim - 100
vulnerabilities per week, many of them in commercial software, across
every conceivable platform, OS, and execution environment. Who has
the labs and the resources to cover all that? Nobody. Absolutely
nobody.
You're talking a 10 million dollar investment AT LEAST just for a lab
that would cover major versions of the most popular software, and that
probably excludes the labor for coordinating with vendors or
performing verification.
And this is happening in a context where:
- consumers want perfect information
- they want it the moment an issue becomes public
- they don't want to pay a lot for it
(which makes me think of the sign on my office door: "Vulnerability
information: fast, cheap, or good. Pick any two.")
In other words, it's just not possible to fully evaluate and verify
every single public vulnerability report. So, you prioritize and do
what you can with the available information.
Every VDB and notification service that I'm aware of absolutely HATES
having bad information. They will GLADLY post corrections when they
are notified. And, hopefully, they can share this information with
other VDB's. OSVDB and CVE have begun to do just that, and the result
is an improvement in the quality of both databases and, consequently,
better information for all their consumers.
Despite all the criticism of VDB's and notification services, they do
a lot of work behind the scenes that few people seem to fully
appreciate. By no means are they perfect, but you can't create
perfection out of chaos.
- Steve