Message-ID: <ILEPILDHBOLAHHEIMALBEEGEELAA.jasonc@science.org>
From: jasonc at science.org (Jason Coombs)
Subject: FW: Response to David Litchfield on Responsible Disclosure and Infosec Research
-----Original Message-----
From: Jason Coombs [mailto:jasonc@...ence.org]
Sent: Wednesday, January 29, 2003 12:52 PM
To: David Litchfield [david@...software.com]
Cc: bugtraq@...urityfocus.com
Subject: Response to David Litchfield on Responsible Disclosure and Infosec Research
Aloha, David.
Please continue to publish proof of concept sample exploit code and disclose
the details of vulnerabilities that you discover or analyze. The public
receives little or no security benefit from keeping knowledge obscure, and
closed source (secret) analysis of mistakes from the past guarantees that
those same mistakes will be made by others in the future.
Sapphire was a gem. With 376 bytes this worm attached a marker that screamed
"insecure" to every computer it infected, creating a worldwide information
security response focused on precisely those boxes that most urgently needed
security hardening.
Sapphire could have destroyed data on each computer it entered; its author
chose not to make it do so: for this we may be lucky, or we may have
somebody patriotic to thank for calling this threat to our attention before
it got exploited by somebody else for the purpose of doing real harm.
Even if you had authored Sapphire in full self-replicating form, it is the
act of using the worm as a tool of attack that violates law. Disclosing its
source code or its packet capture/object code should very clearly violate no
laws. The fact that we are now faced with several laws in the United States
that might be leveraged by an aggressive prosecutor to turn this disclosure
into a violation of law is itself an urgent systemic vulnerability in need
of repair. Before anyone will believe this to be the case, we need at least
one example exploit. The Elcomsoft case was not it.
Technology suffers from inherent radical fallibility, not unlike us humans.
Our legal and business systems, every system we create, can be exploited and
manipulated to cause harm or accumulate unjust riches and power. A
well-informed and well-armed public is an essential defense against such
manipulative abuses. Every effort to curtail disclosure, and every
individual who concedes, pushes public engagement in and awareness of
security analysis deeper into obscurity and illegality.
Vulnerabilities, and the ways in which they are being exploited, must be
published openly. Security through obscurity does not work. When
vulnerabilities and exploits are kept secret, those at-risk are deprived of
the opportunity to build countermeasures. There are ALWAYS countermeasures
possible for any exploit. Nothing good comes from hiding threats from those
who can prevent damage to their computer systems if they knew of the threats
in advance.
Every attack involves at least two people, and in virtually every
circumstance of attack either one of those people could have prevented it in
advance. Oftentimes attacks are made possible because the victim does not
understand the risk posed by their reliance on a system created by somebody
else. This is true of infosec vulnerability exploits just as it is true of
all other harms that people do to other people in the physical world. There
should be no doubt that supplying education and detailed technical training
freely and to all people increases the potential for security. Everyone
should analyze security threats that exist in the systems they rely on that
were created by other people -- whether those systems are based on computer
networks or based on laws and methods of government. The detailed technical
reasons for currency collapse and other geopolitical and financial threats
are always disclosed in full and analyzed completely by economists and
academics, and these events cause far more real-world damage than a simple
information security exploit. The idea that publishing detailed technical
descriptions of the vulnerabilities in a system of economics or the
practices of a financial market is in and of itself harmful simply because
somebody is enabled through this knowledge to exploit the vulnerabilities
misses the point that without disclosure there would be no chance of repair
and defense. Those social engineers (legislators, politicians, influential
business leaders, etc.) who are responsible for protecting against threats
to the systems to which they tend will have no opportunity to redesign
vulnerable systems if a period of time does not elapse during which the
vulnerability is discussed and analyzed widely.
Every time a vendor publishes a fix for a security bug, the fix itself
discloses the vulnerability in detail -- to black hats, malicious hackers,
and to information security experts who take the time to conduct full
forensic analysis of the changes from the previous version of software to
the bug-fixed version.
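The forensic comparison described above can be sketched at its crudest: a byte-for-byte diff of the old and patched binaries points an analyst at the regions the vendor changed. This is a minimal illustration under stated assumptions only -- real patch analysis must account for recompilation shifting code around and would use a disassembler on the changed functions -- and the function name here is my own invention, not any real tool's API:

```python
def changed_offsets(old: bytes, new: bytes) -> list:
    """Return byte offsets where two binaries differ.

    Crude sketch: positions that differ in the overlapping region,
    plus the tail if the patched file grew or shrank. A real analyst
    would feed these regions to a disassembler next.
    """
    n = min(len(old), len(new))
    offsets = [i for i in range(n) if old[i] != new[i]]
    # A size change (code inserted or removed) is also a difference.
    offsets.extend(range(n, max(len(old), len(new))))
    return offsets
```

Even this naive pass shows why shipping a fix while withholding details is futile: the fix itself is the disclosure.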
Attempting to keep vulnerabilities secret while simultaneously releasing bug
fixes for them is harmful to security. If secrecy worked, it would be far
better for vendors NOT to release bug fixes and for nobody to talk about
information security threats. Obviously this is not practical, or
reasonable, therefore full disclosure is the only solution.
We all know code and knowledge we release will be used to do harm if it is
possible for it to do so. If I were the prosecutor, I'd bring charges
against anyone who releases any code, and some who release knowledge, on the
basis that there was specific foreknowledge as to harmful uses. The Patriot
Act, DMCA, and Computer Fraud and Abuse Act would give me legal grounds to
do so -- as prosecutor it would be my job to exploit the law, not interpret
or judge it. There is every reason to believe that more prosecutors will push
forward with such cases against those who disclose vulnerability details and
in particular those who disclose proof of concept code.
Careful, thoughtful, and articulate full disclosure of vulnerabilities and
proof of concept publication is ethical and necessary. You must have a
detailed understanding of the DMCA, Patriot Act, and Computer Fraud and
Abuse Act to continue this practice in the U.S. Even if you think you
understand these laws clearly, study them in detail yourself just as you
would conduct any vulnerability or social engineering threat analysis and
consider the legal vulnerabilities they create for you as an infosec
researcher when they are combined aggressively to prosecute you for what you
have done. You do in fact put yourself at risk by disclosing vulnerabilities
and publishing proof of concept code. There just isn't sufficient case law
at this time for optimists to understand the risks. As a proper pessimist,
you are in a much better position to appreciate the worst case scenario --
Sapphire having been based on your published code may already be setting in
motion that worst case scenario for you. Hopefully not. Your only defense
under law is proof that your research and publication methods meet an
objective evidence test establishing beyond reasonable doubt that your acts
and the resulting disclosure were entirely consistent with information
security and/or cryptography research -- and urgently necessary to enable
proactive defense against attacks. Have this evidence ready just in case
it's needed.
Victims of exploits should report specific incidents of attacks they have
suffered because after the fact they are no longer vulnerable to this same
attack, but others still are, and sharing this information helps everyone to
defend themselves. Further, the information security community provides
assistance and advice to those who have become victims of exploits. The
quality of this open analysis of the reasons for and defenses to an exploit
or vulnerability is superior to paid information security consulting in
secret in virtually every case because more eyeballs make certain that every
nuance of a threat is articulated.
While some end-users were inconvenienced by the Sapphire worm, every
end-user benefits from the security hardening that occurs in response to
this type of exploit. The network is safer now, for everyone, than it was
just three days ago. Without the Sapphire worm, we would not have recognized
and repaired extremely important vulnerabilities in our global information
security infrastructures.
As for companies and critical infrastructure (e.g. ATMs, Continental
airlines, American Express, Microsoft) that go down in the face of worm
attacks that exploit vulnerabilities we disclose -- There is no excuse for
any important information system to rely exclusively on or be exposed
indirectly to the Internet. Failover routes must be in place for important
services; the Internet can be used safely for important business services
only when appropriate security and fault tolerance countermeasures are
implemented. The Sapphire worm was successfully blocked by the simplest and
most common information security device: the firewall. The fact that so many
networks were vulnerable to Sapphire illustrates blatant disregard for
security and proper fault tolerant system engineering design principles by
many organizations in favor of quick and simple worker productivity
solutions.
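The firewall point above is concrete: Sapphire traveled in a single UDP datagram to port 1434, the SQL Server Resolution Service, so one inbound filter rule defeated it. The decision such a rule encodes can be sketched as follows -- a toy illustration of the perimeter policy, not a real packet filter, with function and parameter names of my own choosing:

```python
# Sapphire/Slammer propagated in one UDP datagram to port 1434.
# A perimeter rule as simple as "drop inbound UDP/1434 from
# untrusted sources" blocked it entirely.
SAPPHIRE_PORT = 1434

def should_drop(protocol: str, dst_port: int, src_trusted: bool) -> bool:
    """Decide whether an inbound packet matches the Sapphire vector."""
    return protocol == "udp" and dst_port == SAPPHIRE_PORT and not src_trusted
```

Any organization whose SQL servers answered UDP/1434 from the open Internet had skipped precisely this one-line policy decision.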
Full disclosure made directly to those who care enough about security to
read security alert and advisory documents like this one is an effective way
to communicate vulnerability details to those who most urgently need them
and who are most likely to act upon them.
THE MOST IMPORTANT MITIGATING FACTOR IS ALWAYS AN INFORMED POPULATION
THAT IS READY, WILLING, AND ABLE TO ACT WHEN ACTION IS REQUIRED TO ENSURE
THE SECURITY AND INTEGRITY OF INFORMATION SYSTEMS AND PROTECT STAKE-HOLDERS
WHO WOULD OTHERWISE BE BOTH AT-RISK AND UNINFORMED; IT IS IRRESPONSIBLE FOR
A SECURITY RESEARCHER TO TRUST SOMEBODY ELSE TO DISSEMINATE IMPORTANT
INFORMATION ABOUT NEW VULNERABILITIES AND IT IS FURTHER IRRESPONSIBLE FOR A
PERSON WHO KNOWS OF A SECURITY VULNERABILITY TO KEEP IT SECRET FOR A
PROLONGED PERIOD OF TIME IN THE IRRATIONAL AND NARCISSISTIC HOPE THAT NOBODY
ELSE IS SMART ENOUGH TO FIND THE SAME VULNERABILITY.
A small, highly-skilled and diligent distributed group of
self-coordinating, self-organizing infosec experts who know each other and
communicate freely is far more capable of responding to security incidents
and moving forward any and all preventative measures necessary to minimize
the security risk and imminent dangers of any infosec threat than are dozens
of people and organizations compromised by politics and fear. To ensure
continued high-quality, timely, and accurate vulnerability disclosure
requires peer-reviewed communication free from restrictive and oppressive
forces. Those who pose a threat to information security have this freedom to
communicate because they take it or they make it even though it requires
them to take personal risk. For information security professionals and the
United States Government to deny themselves, their employees, and citizens
this same freedom as a defense against attack is not only counter-productive,
it is also insane.
The DMCA section 1201 "Circumvention of copyright protection systems"
also includes provisions for "PERMISSIBLE ACTS OF ENCRYPTION RESEARCH".
There should be no concern on the part of any security researcher or
cryptographer that communicating the results of an ethical information
security analysis might result in arrest and prosecution for violation of
the DMCA or other laws. THE DMCA DOES NOT TRUMP THE FIRST AMENDMENT.
Like an IETF Working Group or an open source or free software/GNU
development effort, anyone who wishes to do so and who has something of
value to contribute can contact infosec peers and solicit the forensic
analysis help of any other security coordinator, infosec, or forensics
expert without fear of prosecution for criminal conspiracy. In practice,
contacting a vendor expecting forensic analysis assistance is futile;
vendors will take a new vulnerability report and conduct their own forensic
analysis but won't reveal additional aspects of a vulnerability to you
because you are untrustworthy. The vendor has no incentive to spread
vulnerability information and you have no "need to know" more than the
vendor chooses to tell you about the scope of the vulnerability you
discovered. Entrusting vendors with exclusive possession of vulnerability
details is counterproductive to the desired end-result of secure information
systems and properly hardened security policies; the analysis capabilities
of security researchers who are not restricted by employment contracts,
confidentiality agreements, and other impairments are superior in every
respect and in every instance thus far examined by this author.
Sincerely,
Jason Coombs
jasonc@...ence.org